Sometimes you do need Kubernetes! But how should you decide?

At RisingStack, we help companies to adopt cloud-native technologies, or if they have already done so, to get the most mileage out of them.

Recently, I was invited to Google DevFest to deliver a presentation on our experiences working with Kubernetes.

Below I talk about an online learning and streaming platform where the decision to use Kubernetes has been contested both internally and externally since the beginning of its development.

The application and its underlying infrastructure were designed to meet the needs of the regulations of several countries:

  • The app had to be able to run on-premises, so students’ data would never leave a given country, but it also had to be available as a SaaS product.

  • It can be deployed as a single-tenant system, where a business customer hosts a single instance serving anywhere from a handful of users to a school with hundreds of them.

  • Or it can be deployed as a multi-tenant system where the client is e.g. a government and needs to serve thousands of schools and millions of users.

The application itself was developed by multiple, geographically scattered teams, so a microservices architecture was justified. But both the distributed system and the underlying infrastructure seemed like overkill, considering that at the product's initial market entry most of its customers would only need small instances.

Was Kubernetes suited for the job, or was it overkill? Did our client really need Kubernetes?

Let’s figure it out.

(Feel free to check out the video presentation, or the extended article version below!)

Let's talk a bit about Kubernetes itself!

Kubernetes is an open-source container orchestration engine that has a vast ecosystem. If you run into any kind of problem, there's probably a library somewhere on the internet that already solves it.

But Kubernetes also has a daunting learning curve, and initially, it's pretty complex to manage. Cloud ops / infrastructure engineering is a complex and big topic in and of itself.

Kubernetes does not really mask away the complexity from you, but plunges you into deep water as it merely gives you a unified control plane to handle all those moving parts that you need to care about in the cloud.

So, if you're just starting out right now, then it's better to start with small things and not with the whole package straight away! First, deploy a VM in the cloud. Use some PaaS or FaaS solutions to play around with one of your apps. It will help you gradually build up the knowledge you need on the journey.

So you want to decide if Kubernetes is for you.

First and foremost, Kubernetes is for you if you work with containers! (That kind of goes without saying for a container orchestration system.) But you should also have more than one service or instance.

Kubernetes makes sense when you have a large microservices architecture, or when you run dedicated instances per tenant and have a lot of tenants.

Also, your services should be stateless, and your state should be stored in databases outside of the cluster. Another selling point of Kubernetes is the fine-grained control it gives you over the network.

And, maybe the most common argument for using Kubernetes is that it provides easy scalability.
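
To show how little ceremony that can take, here's a minimal sketch using the official Kubernetes Python client to autoscale a hypothetical "web" Deployment on CPU usage; the deployment name, namespace and thresholds are made up for illustration.

```python
# A minimal sketch, not production code: scale a hypothetical "web" Deployment
# between 2 and 10 replicas based on CPU usage, using the official Python client.
from kubernetes import client, config

config.load_kube_config()  # uses your local kubeconfig

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="web"
        ),
        min_replicas=2,
        max_replicas=10,
        target_cpu_utilization_percentage=70,  # add replicas above ~70% CPU
    ),
)

client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa
)
```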

Okay, and now let's take a look at the flip side of it.

Kubernetes is not for you if you don't need scalability!

If your services rely heavily on disks, then you should think twice about whether you want to move to Kubernetes. Basically, a disk can usually only be attached to a single node at a time, so all the services that use it need to reside on that one node. Therefore you lose node auto-scaling, which is one of the biggest selling points of Kubernetes.

For similar reasons, you probably shouldn't use k8s if you don't host your infrastructure in the public cloud. When you run your app on-premises, you need to buy the hardware beforehand and you cannot just conjure machines out of thin air. So basically, you also lose node auto-scaling, unless you're willing to go hybrid cloud and bleed over some of your excess load by spinning up some machines in the public cloud.

If you have a monolithic application that serves all your customers and you need some scaling here and there, then cloud service providers can handle it for you with autoscaling groups.

There is really no need to bring in Kubernetes for that.

Let's see our Kubernetes case-study!

Maybe it's a little bit more tangible if we talk about an actual use case where we had to go through the decision-making process.

The Online Learning Platform is an application that you can imagine as your classroom, moved to the internet.

You can have conference calls. You can share files as handouts, you can have a whiteboard, and you can track the progress of your students.

This project started during the first wave of the lockdowns around March, so one thing that we needed to keep in mind is that time to market was essential.

In other words: we had to do everything very, very quickly!

This product targets mostly schools around Europe, but it is now used by corporations as well.

So, we're talking about millions of users from the moment we go to market.

The product needed to run on-premises, because one of the main targets was governments.

Initially, we were provided with a proposed infrastructure where each school would have its own VM, and all the services and all the databases would reside in those VMs.

Handling that many virtual machines, properly rolling out updates to them, and monitoring all of them sounded like a nightmare to begin with. Especially if we consider the fact that we only had a couple of weeks to go live.

After studying the requirements and the proposal, it was time to call the client to..

Discuss the proposed infrastructure.

So the conversation was something like this:

  • "Hi guys, we would prefer to go with Kubernetes because to handle stuff at that scale, we would need a unified control plane that Kubernetes gives us."

  • "Yeah, sure, go for it."

And we were happy, but we still had a couple of questions:

  • "Could we, by any chance, host it on the public cloud?"

  • "Well, no, unfortunately. We are negotiating with European local governments and they tend to be squeamish about sending their data to the US. "

Okay, anyways, we can figure something out...

Crap! But all was not lost: we still needed to talk to the developers.

Let's call the developers!

It turned out that what we were dealing with was a fairly standard microservice-based architecture, consisting of a lot of services talking over HTTP and messaging queues.

Each service had its own database, and most of them stored some files in Minio.

In case you don't know it, Minio is an object storage system that implements the S3 API.

Now that we knew the fine-grained architectural layout, we gathered a few more questions:

  • "Okay guys, can we move all the files to Minio?"

  • "Yeah, sure, easy peasy."

So, we were happy again, but there was still another problem, so we had to call the hosting providers:

  • "Hi guys, do you provide hosted Kubernetes?"

  • "Oh well, at this scale, we can manage to do that!"

So, we were happy again, but..

Just to make sure, we wanted to run the numbers!

Our target was to be able to run around 6 000 schools on the platform in the beginning, so we had to see if our plans lined up with Kubernetes' cluster limits!

We shouldn't have more than 150 000 total pods!

10 pods per tenant times 6 000 tenants is 60 000 pods. We're good!

We shouldn't have more than 300 000 total containers!

It's one container per pod, so we're still good.

We shouldn't have more than 100 pods per node and no more than 5 000 nodes.

Well, what we have is 60 000 pods, but since each tenant runs on its own dedicated instances, we only pack about 10 pods (one tenant) onto a node. That's roughly 6 000 nodes, and that's just the initial rollout, so we're already over the 5 000-node limit.
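
To make the back-of-the-envelope check explicit, here it is as a few lines of Python; the per-tenant pod count and the one-node-per-tenant density are the assumptions described above.

```python
# Rough capacity check against the documented Kubernetes cluster limits.
TENANTS = 6_000           # schools at the initial rollout
PODS_PER_TENANT = 10      # rough size of a single school's instance
NODES_PER_TENANT = 1      # dedicated instance(s) per tenant

MAX_PODS = 150_000
MAX_CONTAINERS = 300_000
MAX_NODES = 5_000

pods = TENANTS * PODS_PER_TENANT        # 60 000 -> fine
containers = pods                       # one container per pod -> fine
nodes = TENANTS * NODES_PER_TENANT      # 6 000 -> over the 5 000 node limit

print(pods <= MAX_PODS, containers <= MAX_CONTAINERS, nodes <= MAX_NODES)
# prints: True True False -> a single cluster won't do
```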

Okay, well... Crap!

But, is there a solution to this?

Sure, it's federation!

We could federate our Kubernetes clusters..

..and overcome these limitations.

We have worked with federated systems before, so Kubernetes surely provides something for that, riiight? Well yeah, it does... kind of.

There is the Federation v1 API, but it is sadly deprecated.

Then we saw that Kubernetes Federation v2 was on the way!

It was still in alpha at the time we were dealing with this issue, and the GitHub page said it was rapidly moving towards a beta release. Taking a look at the releases page, though, we realized that the beta had been overdue by about half a year by then.

Since we only had a short period of time to pull this off, we really didn't want to live that much on the edge.

So what could we do? We could federate by hand! But what does that mean?

In other words: what could have been gained by using KubeFed?

Having a lot of services would have meant that we needed a federated Prometheus and logging setup (be it Graylog or ELK) anyway. So the two remaining aspects of the system were rollouts / tenant generation, and manual intervention.

Manual intervention is tricky. To make it easy, you need a unified control plane where you can eyeball and modify anything. We could have built a custom one that gathers all information from the clusters and proxies all requests to each of them. However, that would have meant a lot of work, which we just did not have the time for. And even if we had the time to do it, we would have needed to conduct a cost/benefit analysis on it.

The main factor in deciding whether you need a unified control plane for everything is scale, or in other words, the number of different control planes you have to handle.

The original approach would have meant 6 000 separate control planes. That's just way too much to handle for a small team. But if we could bring it down to 20 or so, that could be bearable. In that case, all we need is an easy mind map that leads from services to their underlying clusters. The actual route would be something like:

Service -> Tenant (K8s Namespace) -> Cluster.

The Service -> Namespace mapping is provided by Kubernetes, so we needed to figure out the Namespace -> Cluster mapping.

This mapping is also necessary to reduce the cognitive overhead and the time spent digging around when an outage happens, so it needs to be easy to remember, while still providing a more or less uniform distribution of tenants across clusters. The most straightforward way seemed to be to base it on geography. I'm most familiar with Poland's and Hungary's geography, so let's take them as an example.

Poland comprises 16 voivodeships, while Hungary comprises 19 counties as main administrative divisions. Each country’s capital stands out in population, so they have enough schools to get a cluster on their own. Thus it only makes sense to create clusters for each division plus the capital. That gives us 17 or 20 clusters.

So if we get back to our original 60 000 pods and the roughly one-node-per-tenant density, we can see that 2 clusters would be enough to host them all, but that leaves us no room for either scaling or later expansions. If we spread them across 17 clusters - in the case of Poland, for example - that means around 3 500 pods and 350 nodes per cluster, which is still manageable.
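
In code, such a mapping can be as simple as a lookup table from administrative region to cluster. Here's a minimal sketch with Poland as the example; the cluster names are made up for illustration.

```python
# Hypothetical Namespace -> Cluster mapping based on administrative regions.
POLAND_CLUSTERS = {
    "warszawa": "pl-warszawa",          # the capital gets a cluster of its own
    "mazowieckie": "pl-mazowieckie",    # one cluster per voivodeship
    "malopolskie": "pl-malopolskie",
    # ...and so on for the remaining voivodeships
}

def cluster_for_tenant(region: str) -> str:
    """Return the cluster a tenant's namespace should live in."""
    try:
        return POLAND_CLUSTERS[region.lower()]
    except KeyError:
        raise ValueError(f"unknown region: {region!r}")

# A school in Kraków ends up on the Małopolska cluster:
assert cluster_for_tenant("malopolskie") == "pl-malopolskie"
```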

This could be done in a similar fashion for any European country, but it still needs some architecting when setting up the actual infrastructure. And when KubeFed becomes available (and somewhat battle-tested), we can easily join these clusters into one single federated cluster.

Great, we have solved the problem of control planes for manual intervention. The only thing left was handling rollouts..

As I mentioned before, several developer teams had been working on the services themselves, and each of them already had their own GitLab repos and CI pipelines. They already built their own Docker images, so we simply needed a place to gather them all and roll them out to Kubernetes. So we created a GitOps repo where we stored the Helm charts, and set up a GitLab CI pipeline to build the actual releases and then deploy them.

From here on, it takes a simple loop over the clusters to update the services when necessary.
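
A minimal sketch of what that loop could look like, assuming one kubeconfig context per cluster and one Helm release per service; the cluster, service and chart names below are made up for illustration.

```python
# Hypothetical rollout loop: upgrade every service's Helm release on every cluster.
import subprocess

CLUSTERS = ["pl-warszawa", "pl-mazowieckie", "pl-malopolskie"]  # kubeconfig contexts
SERVICES = ["classroom", "whiteboard", "files"]                 # charts in the GitOps repo

def rollout(version: str) -> None:
    for cluster in CLUSTERS:
        for service in SERVICES:
            subprocess.run(
                [
                    "helm", "upgrade", "--install", service, f"./charts/{service}",
                    "--kube-context", cluster,
                    "--set", f"image.tag={version}",
                ],
                check=True,
            )

if __name__ == "__main__":
    rollout("1.2.3")
```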

The other thing we needed to solve was tenant generation.

That was easy as well, because we just needed to create a CLI tool that could be invoked with the school's name and its county or state.

That designates its target cluster; the tool then pushes the tenant definition to our GitOps repo, which basically triggers the same rollout as a new release.
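
A sketch of what such a tool could look like, reusing the hypothetical region-to-cluster table from above and assuming tenant definitions are plain files in the GitOps repo.

```python
# Hypothetical tenant-generation CLI: pick the target cluster from the region,
# write a tenant definition into the GitOps repo, and let CI do the rollout.
import argparse
import pathlib
import subprocess

REGION_TO_CLUSTER = {
    "mazowieckie": "pl-mazowieckie",
    "malopolskie": "pl-malopolskie",
    # ...and so on
}

def main() -> None:
    parser = argparse.ArgumentParser(description="Create a new tenant (school).")
    parser.add_argument("school_name")
    parser.add_argument("region", choices=sorted(REGION_TO_CLUSTER))
    args = parser.parse_args()

    cluster = REGION_TO_CLUSTER[args.region]
    slug = args.school_name.lower().replace(" ", "-")

    tenant_file = pathlib.Path("tenants") / cluster / f"{slug}.yaml"
    tenant_file.parent.mkdir(parents=True, exist_ok=True)
    tenant_file.write_text(
        f"name: {args.school_name}\nnamespace: {slug}\ncluster: {cluster}\n"
    )

    # Committing to the GitOps repo is what triggers the rollout pipeline.
    subprocess.run(["git", "add", str(tenant_file)], check=True)
    subprocess.run(["git", "commit", "-m", f"Add tenant {slug}"], check=True)
    subprocess.run(["git", "push"], check=True)

if __name__ == "__main__":
    main()
```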

We were almost good to go, but there was still one problem: on-premises.

Although our hosting providers effectively acted as a kind of public cloud (or at least something we can think of as one), we were also targeting companies that want to educate their employees.

Huge corporations - like banks - are just as squeamish about sending their data out to the public internet as governments, if not more so.

So we needed to figure out a way to host this on servers within vaults completely separated from the public internet.

In this case, we had two main modes of operation.

  • One was when a company just wanted a boxed product and didn't really care about scaling it.

  • The other was when they expected the system to be scaled, but they were prepared to handle that themselves.

The second case was kind of a bring-your-own-database scenario: the system could be set up so that we connected to the customer's existing database.

In the first case, what we could do was package everything - including the databases - into one VM, into one Kubernetes cluster. But! I just wrote above that you probably shouldn't use disks and shouldn't have databases within your cluster, right?

However, in that case, we already had a working infrastructure.

Kubernetes provided us with infrastructure as code already, so it only made sense to use that as a packaging tool as well, and use Kubespray to just spray it to our target servers.
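
Kubespray itself is a set of Ansible playbooks, so "spraying" a cluster onto the target machines essentially boils down to running its cluster.yml playbook against an inventory describing those machines. A rough sketch, meant to be run from a Kubespray checkout; the inventory path is an assumption.

```python
# Hypothetical wrapper around a Kubespray run: the heavy lifting is done by
# Kubespray's own Ansible playbook, we just point it at the on-prem inventory.
import subprocess

def install_cluster(inventory: str = "inventory/onprem/hosts.yaml") -> None:
    subprocess.run(
        [
            "ansible-playbook",
            "-i", inventory,
            "--become", "--become-user=root",
            "cluster.yml",
        ],
        check=True,
    )

if __name__ == "__main__":
    install_cluster()
```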

It wasn't a problem to have disks and DBs within our cluster, because the target was companies that didn't want to scale it anyway.

So it's not about scaling. It is mostly about packaging!

Previously I told you that you probably don't want to do this on-premises, and that still holds! If on-premises is your main target, then you probably shouldn't go with Kubernetes.

However, as our main target was somewhat of a public cloud, it wouldn't have made sense to just recreate the whole thing - basically create a new product in a sense - for these kinds of servers.

So, as this was kind of a spin-off, Kubernetes made sense here as well, as a packaging solution.

Basically, I've just given you a bullet-point list to help you determine whether Kubernetes is for you or not, and then I promptly tore it apart and threw it away.

And the reason for this is - as I also mentioned:

Cloud ops is difficult!

There aren't really one-size-fits-all solutions, so basing your decision on checklists you see on the internet is definitely not a good idea.

We've seen it a lot of times: companies adopt Kubernetes because it seems to fit, but when they actually start working with it, it turns out to be overkill.

If you want to save yourself a year or two of headaches, it's a lot better to ask an expert first, and just spend a couple of hours or days going through your use cases and discussing them.

In case you're thinking about adopting Kubernetes, or getting the most out of it, don't hesitate to reach out to us at [email protected], or by using the contact form below!
