Kubernetes 101: The What and Why

What is Kubernetes, and why does everyone seem to be talking about it?

Dimitris Poulopoulos
Dec 10, 2020
Image by chenspec from Pixabay

Kubernetes has been a buzzword in the infrastructure world for a few years now. Apart from being the Greek word for a helmsman, someone who is in command of a ship, Kubernetes is arguably one of the most successful and fastest-growing open-source projects.

But why is that? What is Kubernetes, and what problems does it try to solve? This story examines the whats and whys of Kubernetes, serving as an introduction to the world of containers and orchestration.

What is Kubernetes?

Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services. It facilitates both declarative configuration and automation, letting us run distributed systems resiliently and scale them to meet user demand.

In simpler terms, Kubernetes is a container orchestrator: it makes sure that each container runs where it is supposed to and that containers can talk to each other.
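To make "declarative" a bit more concrete, here is a minimal sketch using the official Kubernetes Python client (the kubernetes package). The Deployment name, image, and replica count are hypothetical, chosen only for illustration, and a reachable cluster with a local kubeconfig is assumed. The point is that we describe the desired state, three replicas of a container image, and Kubernetes keeps working to make the cluster match it:

```python
from kubernetes import client, config

# Load credentials from the local kubeconfig (assumes access to a cluster).
config.load_kube_config()

# Desired state: a Deployment called "web" (a hypothetical name) that keeps
# three replicas of an nginx container running at all times.
deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="web",
                        image="nginx:1.25",
                        ports=[client.V1ContainerPort(container_port=80)],
                    )
                ]
            ),
        ),
    ),
)

# Hand the desired state to the cluster; from here on, Kubernetes schedules
# the containers and replaces any replica that dies, without further input.
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```

The same desired state is more commonly written as a YAML manifest and applied with kubectl, but the idea is identical: we state what we want to run, not the individual steps needed to get there.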

If you still feel lost, that's fine. Sometimes you first need to understand the why: what the problem is and how this shiny new thing solves it. To this end, let's examine the purpose of Kubernetes.

Why Kubernetes?

Some applications bundle all of their functionality, such as different services, transactions, or third-party integrations, into a single deployable artifact. These applications are called monoliths.

This design makes a system rigid and challenging to upgrade, since everything has to be rolled out together. Suppose the team of developers working on service X has finished its work; it can do nothing but wait for team Y, which is working on a completely different aspect of the application, to finish as well. Scaling has the same problem: engineers have to provide more resources to the whole application, even though the bottleneck is in a single area.

To address these challenges, we turned to microservices: the system's functionality is split into reusable pieces of code, each responsible for a single operation. If we want to update part of the application, we can update only the service responsible. Moreover, we can scale individual services to match the performance our users expect without overprovisioning the whole system.

Thus, distributed computing started gaining ground, and containers were in the right place at the right time. With containers, developers can package their services with all of their dependencies and configuration, confident that the services will run the same way no matter the underlying infrastructure.

However, there are still some issues that need addressing:
- Updating a container is easy, but how can we do it without downtime?
- How can containers know how to talk to each other?
- How can we monitor the system’s performance and debug issues?

What we needed was an orchestration system that could automate all this mundane work. Specifically, the ideal system should:

  • Schedule workloads (i.e., containers) on different nodes
  • Monitor and react to node and container health issues
  • Provide storage, networking, proxies, security, and logging
  • Be declarative instead of imperative
  • Be extensible

Here is where Kubernetes comes into play. Like an orchestra conductor, Kubernetes manages the lifecycle of containers across different nodes, which can be physical or virtual machines. These nodes are grouped together as a cluster, and each container gets endpoints, DNS-based service discovery, storage, and the ability to scale. Kubernetes is there to automate all this repetitive labor.
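To tie this back to the questions above, here is another hedged sketch, reusing the hypothetical "web" Deployment from earlier: changing the desired image is enough to request an update, and Kubernetes rolls it out gradually by default, so the application keeps serving traffic while containers are replaced.

```python
from kubernetes import client, config

config.load_kube_config()

# Declare a new desired image for the hypothetical "web" Deployment above.
# By default a Deployment rolls the change out gradually: new containers are
# started and must become ready before old ones are removed, so the
# application keeps serving requests during the update.
client.AppsV1Api().patch_namespaced_deployment(
    name="web",
    namespace="default",
    body={
        "spec": {
            "template": {
                "spec": {
                    "containers": [{"name": "web", "image": "nginx:1.26"}]
                }
            }
        }
    },
)
```

The other questions are handled along similar lines: Kubernetes can give a group of containers a stable DNS name and endpoint so other services can find them, and it restarts or reschedules containers that fail health checks.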

Conclusion

Kubernetes has been the buzzword in the infrastructure world for a couple of years now. In this story, we examined what Kubernetes is and what problems it tries to address, and we sketched a bird's-eye view of how it orchestrates containers across a cluster of nodes.

In upcoming articles, we will dive deeper, create applications, manage their lifecycle, and explain several Kubernetes features in finer detail.

Learning Rate is my weekly newsletter for those who are curious about the world of AI and MLOps. You’ll hear from me every Friday with updates and thoughts on the latest AI news, research, repos and books. Subscribe here!

About the Author

My name is Dimitris Poulopoulos, and I’m a machine learning engineer working for Arrikto. I have designed and implemented AI and software solutions for major clients such as the European Commission, Eurostat, IMF, the European Central Bank, OECD, and IKEA.

If you are interested in reading more posts about Machine Learning, Deep Learning, Data Science, and DataOps, follow me on Medium, LinkedIn, or @james2pl on Twitter.

Opinions expressed are solely my own and do not express the views or opinions of my employer.
