I have been taking notes as I go through Kubernetes in Action by Marko Lukša, and I wanted to share them with those who might have a similar interest in containerization and distributed systems in general. This is the first installment of a series called Kube in Action. Every week or so, I'll summarize and explore Kubernetes fundamentals and concepts with hands-on examples as I learn more about Kubernetes.
- Kubernetes abstracts away the hardware infrastructure and exposes your whole datacenter as a single enormous computational resource.
- It allows you to deploy and run your software components without having to know about the actual servers underneath.
- When you deploy a multi-component application through Kubernetes, it selects a server for each component, deploys the component there, and enables it to easily find and communicate with all the other components of your application.
This makes Kubernetes great for most on-premises datacenters, but where it starts to shine is when it’s used in the largest datacenters, such as the ones built and operated by cloud providers.
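To make "deploying a component" concrete, here is a minimal sketch of a Kubernetes Deployment manifest for one such component. The names, image, replica count, and port are illustrative assumptions, not anything from the book:

```yaml
# Hypothetical manifest: runs three replicas of a single component.
# Kubernetes decides which servers (nodes) the pods land on.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-service          # illustrative name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: orders
  template:
    metadata:
      labels:
        app: orders
    spec:
      containers:
      - name: orders
        image: example/orders:1.0   # illustrative image
        ports:
        - containerPort: 8080
```

Applying this with `kubectl apply -f deployment.yaml` hands scheduling to Kubernetes; a Service object would then let the other components discover it by a stable name instead of a server address.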
Kubernetes is a consequence of two trends:
- splitting big monolithic apps into smaller microservices, and
- the changes in the infrastructure that runs those apps.
Microservices communicate through synchronous protocols such as HTTP, over which they usually expose RESTful (Representational State Transfer) APIs, or through asynchronous protocols such as AMQP (Advanced Message Queuing Protocol). These protocols are simple, well understood by most developers, and not tied to any specific programming language. Each microservice can be written in the language that's most appropriate for implementing that specific service.
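As a small illustration of the synchronous style, here is a sketch of a microservice exposing a single RESTful endpoint over HTTP using only Python's standard library. The service name, the `/status` resource, and the JSON payload are my own assumptions for the example:

```python
# Minimal sketch of a microservice exposing one RESTful HTTP endpoint.
# The endpoint path and payload are illustrative, not from the book.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class StatusHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/status":
            # Respond with a small JSON document, as a REST API would.
            body = json.dumps({"service": "orders", "healthy": True}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        # Keep the example quiet: suppress per-request logging.
        pass

def serve(port: int = 8080) -> None:
    """Blocking call: run the service on the given port."""
    HTTPServer(("127.0.0.1", port), StatusHandler).serve_forever()
```

Any other component, in any language, can consume this endpoint with a plain HTTP GET, which is exactly why these protocols work so well across independently developed services.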
Components in a microservices architecture aren't only deployed independently, but are also developed that way. Because of this independence, and because it's common to have a separate team developing each component, nothing prevents a team from using different libraries and replacing them whenever the need arises.