A few years back, the cloud community went through a kind of container craze. Everything you read seemed to be about containers and how they were going to revolutionize the software development industry. The thing is, containers are just, well, containers. They contain all the elements needed to run a particular process, which dramatically simplifies software by making it smaller and self-contained, but by themselves they’re just a small part of something bigger. That bigger thing, as I covered in a recent blog, is called a microservice.
How Dynamic Orchestration, Cloud Containers, and Microservices Work Together
Microservices will revolutionize software development as everything moves to the cloud. But here again, they won’t do it alone. If you’re familiar with cloud-native architecture, you’ll recall there are three key requirements of a cloud-native framework: containers, microservices and dynamic orchestration. This last requirement is absolutely critical, because it’s where the value of containers and microservices is realized. Small, agile and self-contained is very important, but only if you can automate and perpetuate the software lifecycle.
If you think of the lifecycle of software, there are three basic stages: birth (deployment), midlife (updates and changes) and death (removal). In the current telco world, each stage requires significant manual effort, which tends to stifle innovation and adaptation. With dynamic orchestration, you can automate these stages so that software instances are created, updated and removed based on pre-defined criteria, including real-time traffic demands. You read that right: with dynamic orchestration, you can have a system that knows when to create or retire instances of microservices and does so automatically, because you’ve already told it how to behave under those conditions.
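To make the idea of pre-defined criteria concrete, here is a minimal sketch in Python of a scaling decision driven by real-time traffic. The function name and thresholds are illustrative, but the formula mirrors the one Kubernetes documents for its Horizontal Pod Autoscaler: desired = ceil(current × currentMetric / targetMetric).

```python
# Sketch of a pre-defined scaling criterion that dynamic orchestration
# automates: decide how many instances of a microservice to run based on
# observed load, so instances are created and retired without human effort.
import math

def desired_replicas(current_replicas: int,
                     current_cpu_percent: float,
                     target_cpu_percent: float = 80.0,
                     min_replicas: int = 1,
                     max_replicas: int = 10) -> int:
    """Scale the replica count so average CPU converges on the target."""
    ratio = current_cpu_percent / target_cpu_percent
    proposed = math.ceil(current_replicas * ratio)
    return max(min_replicas, min(max_replicas, proposed))

# Traffic spikes: 3 instances averaging 128% of the 80% target -> scale out.
print(desired_replicas(3, 128.0))  # 3 * 128/80 = 4.8 -> 5
# Traffic falls off overnight -> scale in, but never below the minimum.
print(desired_replicas(5, 10.0))   # -> 1
```

An orchestrator evaluates a rule like this continuously, which is exactly the "you've already told it how to behave" behavior described above.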
Kubernetes & Helm: Key Tools for Dynamic Orchestration
Dynamic orchestration rests on two software tools: Kubernetes, which is relatively new, and Helm, which is newer still. Together, Kubernetes and Helm allow software teams to run containers in a contextual group called a pod and apply declarative statements that manage the behavior of those pods under certain conditions. In the case of Affirmed’s virtualized Access and Mobility Management Function (AMF), for example, the AMF pod features three separate containers: the AMF call control container, the service mesh communication proxy container (referred to as a helper container) and Affirmed’s infrastructure container, which carries multiple supplemental sidecars (a Kubernetes term) with supporting services. Kubernetes groups pod types like these together and sets rules or conditions for the AMF network function that automatically apply to all instances of it in the network.
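As a sketch of what such a multi-container pod looks like, here is the structure of a Kubernetes Pod manifest expressed as a Python dict (the same shape you would write in YAML). The container and image names are placeholders modeled on the AMF pod described above, not Affirmed’s actual artifacts.

```python
# Illustrative structure of a three-container pod, modeled on the AMF pod
# described in the text. All fields follow the Kubernetes Pod schema;
# names and images are placeholders.
amf_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {
        "name": "amf",
        "labels": {"app": "amf"},  # Kubernetes groups/selects pods by label
    },
    "spec": {
        "containers": [
            # Primary container: AMF call control
            {"name": "amf-call-control", "image": "example/amf:1.0"},
            # Helper container: service-mesh communication proxy
            {"name": "mesh-proxy", "image": "example/proxy:1.0"},
            # Infrastructure container hosting supporting services
            {"name": "infra", "image": "example/infra:1.0"},
        ],
    },
}

# All three containers share the pod's network namespace and lifecycle.
print([c["name"] for c in amf_pod["spec"]["containers"]])
```

Because the containers live in one pod, they are scheduled, scaled and retired as a unit, which is what makes the pod the natural object for orchestration rules.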
The role of Helm is to provide Kubernetes with a system-level view of the network, giving Kubernetes a kind of built-in intelligence. Where a VNF manager tells OpenStack what to do (something called an imperative command), Helm shares intelligence about pod types and the rules/conditions that have been declared as ideal (e.g., use up to 80 percent of CPU for a particular pod type), so that Kubernetes can dynamically and automatically do the right thing under the right circumstances.
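The imperative-versus-declarative distinction can be sketched in a few lines of Python. This is a conceptual illustration under assumed names (the value keys and the `is_compliant` helper are hypothetical), not Helm’s actual chart format.

```python
# Imperative: a one-off instruction, forgotten as soon as it's executed.
imperative_command = {"action": "scale", "target": "amf", "replicas": 4}

# Declarative: a record of the ideal state, e.g. "use up to 80 percent
# of CPU for this pod type". Helm carries values like these so every
# instance of the pod type inherits the same declared ideal.
declared_values = {
    "amf": {
        "replicas": 4,
        "resources": {"cpuLimitPercent": 80},  # the declared ceiling
    }
}

def is_compliant(observed_cpu_percent: float, values: dict, pod_type: str) -> bool:
    """Check an observed pod against its declared ideal state."""
    return observed_cpu_percent <= values[pod_type]["resources"]["cpuLimitPercent"]

print(is_compliant(65.0, declared_values, "amf"))  # within the declared limit
print(is_compliant(92.0, declared_values, "amf"))  # violates it -> act
```

The imperative command has to be re-issued every time conditions change; the declared values are consulted continuously, which is what lets Kubernetes act without being told twice.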
Kubernetes and Helm are a significant improvement in the cloud world. In fact, the chief mission of the Cloud Native Computing Foundation (CNCF) is to promote the adoption of Kubernetes across cloud computing environments. Relative to OpenStack plus one (or more) VNF managers, Kubernetes + Helm is far lighter and more agile. Where OpenStack is reactive (it only does what it’s told to do), Kubernetes is proactive. Told once, Kubernetes remembers the ideal state for a particular pod type and automatically takes corrective action to reach that state whenever possible.
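That "told once" behavior is the essence of a reconciliation loop: compare the declared desired state with the actual state and issue whatever corrective actions close the gap. The toy Python below sketches the pattern; the function and field names are illustrative, not Kubernetes API calls.

```python
# Toy reconciliation loop body: given a desired state declared once,
# compute the corrective actions that move the actual state toward it.
def reconcile(desired: dict, actual: dict) -> list:
    """Return corrective actions that move actual toward desired."""
    actions = []
    diff = desired["replicas"] - actual["replicas"]
    if diff > 0:
        actions.append(f"create {diff} instance(s)")
    elif diff < 0:
        actions.append(f"retire {-diff} instance(s)")
    return actions

desired_state = {"replicas": 3}

# An instance crashes: actual drifts from desired, so the loop corrects it.
print(reconcile(desired_state, {"replicas": 2}))  # ['create 1 instance(s)']
# Once actual matches desired, there is nothing to do.
print(reconcile(desired_state, {"replicas": 3}))  # []
```

A real controller runs this comparison continuously, which is why no operator has to re-issue the instruction after the first declaration.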
The idea of a dynamically automated network has been a holy grail for a long time. What Kubernetes + Helm does is invite telco software developers and their customers to the non-stop, automatic party of the 5G future. We hope we’ll see you there.