In a previous post, "Clustering: Kubernetes in a nutshell", we suggested three layers for understanding Kubernetes:
- Functional layer
- Conceptual layer
- System layer
Functional layer
You will find a quick overview of the Kubernetes functional layer on the landing page of its official site.
At the functional level, Kubernetes offers the following services:
- Service discovery and load balancing
- Storage orchestration
- Automated rollouts and rollbacks
- Batch execution
- Automatic bin packing
- Self-healing
- Secret and configuration management
- Horizontal scaling
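To make the first item concrete, here is a minimal sketch, assuming the official Kubernetes Python client and a reachable cluster: every Service gets a stable name and cluster IP that load-balances traffic across its pods, which is what "service discovery and load balancing" means in practice.

```python
# Minimal sketch: list every Service the cluster exposes, with its stable
# virtual IP (assumes a kubeconfig on the local machine).
from kubernetes import client, config

config.load_kube_config()        # read credentials from ~/.kube/config
v1 = client.CoreV1Api()

for svc in v1.list_service_for_all_namespaces().items:
    print(svc.metadata.namespace, svc.metadata.name, svc.spec.cluster_ip)
```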
Orchestrators add a level of abstraction above the infrastructure so that DevOps teams can manage resources in a coordinated and coherent manner. A DevOps team deploys applications composed of many services (or containers) arranged and connected in a specific way. The orchestrator is like a conductor:
- from his central place in the orchestra, he has a complete view of the musicians
- he communicates with every musician
- he gives instructions to the whole orchestra, to a group of musicians, or even to a single instrument
- he beats time
- beforehand, he prepares the performance: he chooses the program, the scores, and the composition of the orchestra
The DevOps team programs the orchestrator to deliver services: deploying and upgrading containers (3), providing them with a persistence layer, managing connections between containers and between containers and the outside world (1), scaling up and down (5, 8), and dealing with errors and unavailability.
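Here is a minimal sketch of what "programming the orchestrator" looks like, assuming the official Kubernetes Python client and an illustrative Deployment named "web" in a "demo" namespace: the team declares the desired state through the API and lets Kubernetes reconcile the cluster toward it.

```python
# Minimal sketch: drive a rollout (3) and a horizontal scale-up (8) through
# the API instead of logging into servers. "web" and "demo" are illustrative.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

# Upgrade the container image: Kubernetes performs the rolling update.
apps.patch_namespaced_deployment(
    name="web", namespace="demo",
    body={"spec": {"template": {"spec": {"containers": [
        {"name": "web", "image": "nginx:1.25"}]}}}},
)

# Scale the same deployment up to five replicas.
apps.patch_namespaced_deployment_scale(
    name="web", namespace="demo",
    body={"spec": {"replicas": 5}},
)
```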
Conceptual layer
The conceptual layer is explained in the following Wikipedia resources.
The application domain is vertical and hierarchical:
- Applications (Services) are made of pods
- Pods are made of collocated containers
The Kubernetes service domain is made of cross-cutting concerns:
- Volumes offer data persistence for pods
- Secrets manage sensitive data (keys and encryption/decryption material) for pods
- Deployments manage the horizontal scaling of pods
At the root of these domains, namespaces provide logical isolation in a multi-tenant environment.
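As an illustration of this hierarchy, here is a minimal sketch, assuming the official Kubernetes Python client and hypothetical names ("demo", "demo-pod"): a namespace provides the isolation, the pod groups two collocated containers, and a shared volume is the cross-cutting persistence concern.

```python
# Minimal sketch: a pod made of two collocated containers sharing a volume,
# created inside the "demo" namespace (all names are illustrative).
from kubernetes import client, config

config.load_kube_config()

shared = client.V1Volume(name="shared-data",
                         empty_dir=client.V1EmptyDirVolumeSource())

web = client.V1Container(
    name="web", image="nginx:1.25",
    volume_mounts=[client.V1VolumeMount(name="shared-data",
                                        mount_path="/usr/share/nginx/html")],
)
writer = client.V1Container(
    name="content-writer", image="busybox:1.36",
    command=["sh", "-c", "echo hello > /data/index.html && sleep 3600"],
    volume_mounts=[client.V1VolumeMount(name="shared-data", mount_path="/data")],
)

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="demo-pod", namespace="demo"),
    spec=client.V1PodSpec(containers=[web, writer], volumes=[shared]),
)
client.CoreV1Api().create_namespaced_pod(namespace="demo", body=pod)
```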
System layer / Core components
The Kubernetes documentation outlines its system layer:
- One or more controllers (masters) run the master components:
  - API server: kube-apiserver
  - Configuration and inventory database: etcd
  - Workload scheduling and placement: kube-scheduler
  - Control loops: kube-controller-manager
- Workers execute the tasks:
  - kubelet is the agent that receives instructions from, and reports back to, the controller
  - kube-proxy implements network connectivity
  - the container runtime runs the tasks assigned to the worker
- Addons provide cross-cutting concerns:
  - DNS
  - Web UI (dashboard)
  - Container resource monitoring
  - Cluster-level logging
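You can observe this system layer from the API server itself. The sketch below assumes the official Kubernetes Python client and a kubeadm-style cluster where the master components run as pods in the kube-system namespace:

```python
# Minimal sketch: list the nodes and their roles, then the core components
# and addons that usually live in kube-system.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# Controllers and workers both appear as nodes; labels distinguish their roles.
for node in v1.list_node().items:
    roles = [k for k in node.metadata.labels
             if k.startswith("node-role.kubernetes.io/")]
    print(node.metadata.name, roles)

# kube-apiserver, etcd, kube-scheduler, kube-controller-manager, kube-proxy
# and addons such as DNS typically show up here on a kubeadm-style cluster.
for pod in v1.list_namespaced_pod("kube-system").items:
    print(pod.metadata.name, pod.status.phase)
```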
Skillset
The three layers above correspond to three job roles:
- Operator (functional layer): someone who operates the interfaces to deploy and monitor applications
- Administrator (functional and conceptual layers): someone who understands the conceptual model and troubleshoots and tunes Kubernetes deployments
- Architect (functional, conceptual, and system layers): someone who knows how to design and integrate a Kubernetes cluster
Kubernetes anti-patterns
Anti-pattern #1: traditional architecture
Kubernetes is focused on micro-service architectures. If your application follows a more conventional design, with an application server instance and a traditional database, and, worse, if you deploy your applications manually, the road to Kubernetes will be long, and you may consider a stop at Docker along the way.
Anti-pattern #2: VMs without containers
If your applications are stateful and not containerized, you have to clear two hurdles: containerization and kubernetization. Start with containerization; you can think about kubernetization later.
Anti-pattern #3: too small a farm
Distributed environments are complex beasts that require space and energy.
The controller presented above runs at least four services, plus redundancy. Rancher publishes a very interesting, yet simple, document about the design of a production cluster.
You need at least three controller servers. If you decide to build a more robust architecture, where etcd is isolated from the other components of the controller, six servers are required for the control plane. If you manage only 30 workers with 6 controller servers, the overhead is 20%.
If you're not a big farmer, choose a lighter tractor.
Anti-pattern #4: localized storage
Traditional file servers and databases are stateful and attached to local storage. In a simple architecture, a RAID pool of drives is attached to the servers. In a datacenter architecture, storage is a service offered either by NAS (NFS, iSCSI) or SAN-attached drives. Moving an application from one server to another is possible, but requires special care to free resources and maintain data integrity.
Migrating to Kubernetes leads you to reconsider your storage layer. Kubernetes offers many storage classes, but not all classes are equal. Distributed object storage is a better solution than traditional ones, because many Kubernetes features are unavailable when data is tied to a single node.
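For example, instead of binding an application to a node-local disk, you request storage through a claim that only names a storage class. A minimal sketch, assuming the official Kubernetes Python client and a hypothetical class named "distributed-block":

```python
# Minimal sketch: request 10Gi from a distributed storage class; the pod that
# mounts this claim can be rescheduled on any node that reaches the backend.
from kubernetes import client, config

config.load_kube_config()

pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "app-data", "namespace": "demo"},
    "spec": {
        "storageClassName": "distributed-block",   # hypothetical class name
        "accessModes": ["ReadWriteOnce"],
        "resources": {"requests": {"storage": "10Gi"}},
    },
}
client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="demo", body=pvc)
```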
Anti-pattern #5: no dynamic high-speed LAN
In congested traffic, a sports car burns more gas than a touring car but doesn't get you there any faster.
In distributed environments, moving workloads and data is fast when the LAN is fast.
Think about the network design you opted for when you built the VMware or Hyper-V clusters at your company: dedicated network ports, high-speed Ethernet, high-end switches. Replace "VMware" or "Hyper-V" with "Kubernetes" in the previous sentence and you know what you have to do to build the right network for your Kubernetes cluster!
Anti-pattern #6: cluster already embedded in your middleware
Cassandra, Couchbase, and MongoDB already ship with their own clustering solutions. These clusters are flexible, well established, and powerful. Think twice before moving them to Kubernetes. What is your business case? Blending these services with other distributed services, or orchestrating various database engines, may make a strong case for Kubernetes, but think about it twice before jumping in.
Kubernetes use cases
Use case n°1: cloudified infrastructure
If your organization has moved its infrastructure to a cloud provider, the Kubernetes API is a good candidate for standardizing DevOps scripts. Every major cloud provider offers a Kubernetes service. You may even build a heterogeneous cloud, with multi-vendor and on-premise resources.
This use case is very similar to the OpenStack use case that we mentioned in a previous post.
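A minimal sketch of that standardization, assuming the official Kubernetes Python client and hypothetical kubeconfig contexts "aws-prod" and "on-prem": the same script runs unchanged against any provider, only the context changes.

```python
# Minimal sketch: the same code inventories pods on two different clusters,
# one at a cloud provider and one on premise (context names are illustrative).
from kubernetes import client, config

for ctx in ["aws-prod", "on-prem"]:
    config.load_kube_config(context=ctx)
    v1 = client.CoreV1Api()
    pods = v1.list_pod_for_all_namespaces(watch=False)
    print(ctx, "is running", len(pods.items), "pods")
```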
Use case n°2: cloud provider
If your organization is a cloud provider, Kubernetes is probably the most standardized API for managing containers.
Use case n°3: micro-services or containers
If your organization manages many micro-service-based applications or many containerized applications, Kubernetes is a good bet, among the other cluster technologies we mentioned in a previous post.
Conclusion
Kubernetes is still in its early stages. It is moving rapidly in the hope of becoming a de facto standard for distributed workload orchestration. Use cases will evolve too, as the technology matures. We hope that, in less than a year, we'll write another post with many new, relevant use cases.