Author: Ron Parker

Using Containers in a Cloud Architecture without Virtualization: Isn't It Ironic?

by Ron Parker

The typical network transformation journey looks something like this: Linux, then virtual machines, then containers. But this blog is about the road less taken, and how service providers can pass virtualization by entirely, using containers to go directly to the cloud.

That’s kind of a revolutionary concept. After all, many in IT have been trained to view virtualization as a necessary evolutionary step. Everything is more efficient in a virtualized environment, we were told. And then containers came along. The new reality is that you don’t need virtual machines to run containers. In fact, there are many cases where virtualization actually hurts the performance of a containerized application. In this article, we discuss the advantages of using containers vs. virtual machines.

Virtualization vs. Container Management Platforms

How can virtualization be a bad thing? Virtualization is great if you need to move and share applications between different physical servers, but it comes at a cost: roughly 10% of a server's CPU is consumed just running the virtualization layer. Containers, by contrast, invoke the services they need from their cloud service provider: storage, load balancing, and auto-scaling in particular. That frees up resources on the server, which translates into much faster performance, in some cases as much as 25% faster (source: www.stratoscale.com/blog/data-center/running-containers-on-bare-metal/).

The Benefits of Container Management Platforms 

When I talk about containers providing a service, I'm really talking about Kubernetes, the container management platform. Kubernetes not only supports a variety of cloud environments (OpenStack, AWS, Google Cloud, Azure, etc.) but also understands which environment it's in and automatically spins up the appropriate service, such as ELB (Elastic Load Balancer) in an AWS environment or Octavia in an OpenStack environment. Kubernetes doesn't distinguish between multi-tenant servers running virtual machines and bare-metal servers; it sees each VM or server, respectively, as a node in a cluster. So whether or not you virtualize your servers has no impact on your ability to run containers, although it does impact management and performance. Basically, if you're running a virtualized environment, you have two tiers of orchestration instead of one: the VIM (Virtualized Infrastructure Manager) and Kubernetes.
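To make that concrete, here's a minimal sketch of a Kubernetes Service manifest; the name, label and ports are illustrative. The same manifest works unchanged on AWS or OpenStack, because Kubernetes delegates the load-balancer request to whichever cloud integration is active:

```yaml
# One manifest, any cloud: Kubernetes asks the active cloud provider
# for a load balancer (an ELB on AWS, an Octavia listener on OpenStack).
apiVersion: v1
kind: Service
metadata:
  name: my-app            # illustrative service name
spec:
  type: LoadBalancer      # triggers cloud-specific load-balancer provisioning
  selector:
    app: my-app           # routes traffic to pods carrying this label
  ports:
    - port: 80            # port exposed on the load balancer
      targetPort: 8080    # port the containers actually listen on
```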

But wait a minute, you may be thinking: I thought you needed a virtualized environment to run OpenStack? There's the irony or, more to the point, Ironic. OpenStack Ironic is a service designed specifically to let OpenStack manage bare-metal servers. With it, you can group bare-metal servers into a Kubernetes cluster just as you would group VMs into one. What if you want to run containers on bare-metal servers without OpenStack? That can be done too, and is known as "bare-metal Kubernetes." Load balancing, in this case, can be provided by the MetalLB project.
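For the bare-metal Kubernetes case, a minimal sketch of MetalLB's classic Layer 2 configuration looks like this; the address range is an assumption, and newer MetalLB releases declare pools with CRDs instead of this ConfigMap:

```yaml
# MetalLB answers Service type=LoadBalancer requests on bare metal
# by handing out addresses from a locally owned pool.
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2                # announce addresses via ARP on the LAN
      addresses:
      - 192.168.1.240-192.168.1.250   # example range on the local subnet
```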

If running a cloud environment on bare-metal servers feels like taking a step back to take a step forward, take heart: Chances are, you’ll want both virtualized and non-virtualized servers in your cloud environment. The future isn’t a one-size-fits-all proposition for service providers. There will be cloud services for residential customers that may have ultra-high utilization rates, in which case the performance benefits of a bare-metal server make more sense. For finely sliced enterprise services, however, a flexible multi-tenant model is more desirable. The common thread for both approaches is agility.  

Of course, there's a lot more to this discussion than we could "contain" in a single blog, so feel free to reach out to us if you want to take a deeper dive into cloud architectures.

 

Why Service Mesh for Microservices Makes Sense

by Ron Parker

Containers, Kubernetes and microservices form the foundation of a cloud-native architecture, but they're not the only considerations. In fact, as I write this, the Cloud Native Computing Foundation (CNCF) is considering adding a fourth pillar to its cloud-native requirements: the service mesh.

What is a Service Mesh?

When it comes to understanding and managing microservices, the service mesh is critical. Microservices are very small and tend to move around a lot, making them difficult to observe and track. At the service mesh layer, network operators can finally see clearly how microservices interact with one another (and with other applications), secure those interactions, and manage them based on customizable policies.

The Importance of a Service Mesh

The Service Mesh Provides Load Balancing for Microservices

One of the functions a service mesh provides is load balancing for microservices. Because microservices are instantiated dynamically (they can appear and disappear quickly), traditional network management tools aren't granular enough to manage these lifecycle events. The service mesh, however, understands which microservices are active and which microservices are related (and how), and can enforce policy at a granular level by deciding how workloads should be balanced. For example, if a microservice is upgraded, the service mesh decides which requests should be routed to instances running the stable version and which should be routed to instances running the upgraded version. This policy can be modified multiple times during the upgrade process and serves as the basis for what the industry calls a "canary upgrade."
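Service meshes such as Istio express that canary policy declaratively. Here's a hedged sketch, assuming a service named reviews whose stable and canary subsets are already defined in a DestinationRule:

```yaml
# Send 90% of requests to the stable version and 10% to the canary;
# adjusting the weights mid-upgrade is just an edit to this resource.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews                # illustrative service name
  http:
    - route:
        - destination:
            host: reviews
            subset: stable   # pods running the current version
          weight: 90
        - destination:
            host: reviews
            subset: canary   # pods running the upgraded version
          weight: 10
```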

The Service Mesh Improves Microservices Security

Another area where the service mesh plays a valuable role is microservices security. It is considered best practice to apply the same security guidelines to communications between microservices as to their communications with the "outside" world. This means authentication, authorization and encryption need to be enforced for all inter-microservice communications. The service mesh enforces these security measures without touching application code, and can also enforce security-related policies such as whitelists/blacklists or rate limiting in the event of a denial-of-service (DoS) attack. And the service mesh doesn't stop at security between microservices; it extends security measures to the inbound/outbound communications that flow through the ingress and egress API gateways connecting microservices to other applications.
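In Istio, for instance, mutual TLS between all microservice workloads can be enforced with a single mesh-wide policy. A minimal sketch:

```yaml
# Applying this in Istio's root namespace makes strict mutual TLS
# the default for every microservice-to-microservice connection.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # root namespace => mesh-wide scope
spec:
  mtls:
    mode: STRICT            # reject any plaintext traffic between workloads
```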

The Service Mesh Provides Visibility of Microservices

Finally, the service mesh provides much-needed visibility into the microservices themselves. There are several tools available today that help with this: Istio, which provides the control plane for microservices; Envoy, a microservices sidecar that acts as the communications proxy for the API gateway functions; and Kiali, which visualizes the service mesh architecture at a given point in time and displays information such as error rates between microservices. If you’re unfamiliar with the sidecar concept, you can think of it as an adjunct container attached to the “main” microservice container that provides a supporting service—in the case of Envoy, intercepting both inbound and outbound REST calls.
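As a rough sketch of the sidecar pattern, here is what a pod with a main container and an Envoy sidecar might look like; the image names are placeholders, and in a real Istio deployment the sidecar is normally injected automatically rather than written by hand:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-microservice
spec:
  containers:
    - name: app                            # the "main" microservice container
      image: example/my-service:1.0        # placeholder image
    - name: envoy                          # the adjunct sidecar container
      image: envoyproxy/envoy:v1.28-latest # placeholder tag
      # Both containers share the pod's network namespace, which is
      # what lets Envoy intercept inbound and outbound REST calls.
```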

While the CNCF will likely decide in favor of adding the service mesh to its cloud-native requirements, you can get those benefits today with Affirmed Networks. It's just another example of our forward-thinking approach, since it makes a lot more sense to build those capabilities into our cloud-native architecture right from the beginning than to mesh around with it later.

 

Microservices Observability Brings Clarity to the Cloud-Native Network

by Ron Parker

In the world of microservices, observability is an important concept. Yes, containers and automation are very important concepts too, but with observability, you can see what your microservices are doing, and you have the assurance that they’re performing and behaving correctly.

In the traditional telco world, this concept is known as service assurance. There are vendors who specialize in service assurance solutions that observe network traffic by pulling data straight from fiber connections through physical taps. But how do you put a physical tap on a virtual machine? And how do you monitor cloud-native microservices when there may be thousands of them deployed on a single VM at a given moment in time?

The answer, of course, is that you can't. What works in the physical world is, in the cloud-native world, virtually impossible. Instead, the broader cloud community has developed a robust ecosystem of observability tools to provide service assurance in cloud-native networks. Some of these tools are relatively new, while others have been used by enterprises and cloud providers for years. When Affirmed built its 5G Core solution, we made an important decision to leverage the best microservices observability tools through our Platform as a Service (PaaS) layer, giving mobile network operators (MNOs) a simple, effective way to deliver service assurance.

Four Aspects of Observability in Cloud-Native Networks

The concept of observability in the cloud-native networking world is similar to FCAPS (fault, configuration, accounting, performance and security management) in the traditional telco world. Observability can be broken into four categories: application performance management; logging and events; faults; and tracing.

Application performance management

Application performance management (APM) measures key performance indicators (KPIs) such as latency. The Cloud Native Computing Foundation (CNCF) recommends Prometheus as its KPI collection/storage tool. Prometheus is tightly integrated with Kubernetes and Helm; when you deploy a cloud-native network function (CNF), Prometheus is deployed with it, allowing it to scrape data from microservices in a non-intrusive and efficient manner. That data is then visualized through a tool called Grafana, which builds dashboards out of widgets; Affirmed also integrates Grafana and includes pre-built dashboards and widgets in its 5GC solution.
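That non-intrusive scraping is typically wired up through pod annotations. A hedged sketch using the common prometheus.io annotation convention (it relies on the scrape configuration shipped with typical Prometheus setups, and the names here are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cnf-example                 # illustrative pod name
  annotations:
    prometheus.io/scrape: "true"    # opt this pod into scraping
    prometheus.io/port: "9090"      # where the metrics endpoint listens
    prometheus.io/path: "/metrics"  # the standard metrics path
spec:
  containers:
    - name: app
      image: example/cnf-service:1.0   # placeholder image
```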

Logging and events

This category records what is actually happening to apps and microservices in the cloud-native environment. Cloud providers generally use what is known as the EFK stack (Elasticsearch, Fluentd and Kibana) to record logs and generate alerts. At a high level, Fluentd collects the logs and messages, Elasticsearch stores them, and Kibana provides the data visualization, again using dashboards.
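On Kubernetes, Fluentd usually runs as a DaemonSet so that one collector per node can tail every container's logs and ship them to Elasticsearch. A trimmed sketch (the image tag and Elasticsearch address are assumptions):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      containers:
        - name: fluentd
          image: fluent/fluentd-kubernetes-daemonset:v1-debian-elasticsearch
          env:
            - name: FLUENT_ELASTICSEARCH_HOST
              value: elasticsearch.logging.svc   # assumed Elasticsearch service
          volumeMounts:
            - name: varlog
              mountPath: /var/log                # container logs live here
      volumes:
        - name: varlog
          hostPath:
            path: /var/log                       # mount the node's log directory
```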

Faults

To detect faults, the open-source world offers two tools: ElastAlert (so named because it is designed to work with Elasticsearch) and Alertmanager, which comes bundled with Prometheus. ElastAlert lets you set specific rules and policies to manage alerts (e.g., when X happens, send Y alert via SMS), while Alertmanager generates alerts when specific numerical thresholds are crossed.
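ElastAlert rules are themselves small YAML files. A hedged example of a frequency rule; the index pattern, thresholds and recipient are all illustrative:

```yaml
name: microservice-error-spike   # illustrative rule name
type: frequency                  # fire when events exceed a rate
index: logstash-*                # assumed log index pattern in Elasticsearch
num_events: 50                   # "when X happens": 50 error log lines...
timeframe:
  minutes: 5                     # ...within any five-minute window
filter:
  - term:
      level: "error"             # match error-level log entries
alert:
  - "email"                      # "send Y alert": notify by email
email:
  - "ops@example.com"            # placeholder recipient
```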

Tracing

Tracing is something unique to the cloud-native world, allowing network operators to track the exchanges between different microservices. For example, microservice A might invoke microservice B to complete a process, which in turn invokes microservice C and ultimately returns control back to microservice A. As you might imagine, tracking the exchanges between every microservice instance would generate an almost unmanageable amount of data, so tracing tools like Jaeger (a CNCF project) collect only a very small sample of these exchanges for analysis and visualization; the default sample rate is a tenth of one percent. Even this small sample is useful, as it provides visibility into how microservices are functioning. And through policy, this global sampling percentage can be overridden for transactions containing certain data (e.g., for a particular 5GC "slice" or from a particular user).
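With the Jaeger Operator, for example, that default sampling policy can be declared right in the deployment resource; a sketch, with the per-slice or per-user overrides left out:

```yaml
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: tracing            # illustrative deployment name
spec:
  sampling:
    options:
      default_strategy:
        type: probabilistic
        param: 0.001       # a tenth of one percent of traces
```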

 

Beyond those four aspects of observability, there’s still more to see in a cloud-native architecture. For example, tools like Istio, Envoy and Kiali are used to illustrate the microservices topology (referred to in the industry as a service mesh) and show where errors are happening. (Service meshes are worthy of their own blog, so stay tuned for that.)

In addition to using open-source, third-party tools, a fully virtualized probing solution can record logging and event data within the microservice itself. Think of a fully virtualized probe (vProbe) as logging and events on steroids: it captures detailed microservices data from every CNF and user endpoint, including TCP round-trip times and retransmission rates, deep packet inspection details, and more. Unlike traditional physical probes, a vProbe doesn't negatively impact network performance, and it generates a wealth of data to support big data analyses, artificial intelligence and machine-learning systems.

If your 5G vendor isn't transparent about how its solutions support observability, maybe you should be looking for a new vendor.

Are You Ready for the Automatic, Non-Stop Future?

by Ron Parker

A few years back, the cloud community went through a kind of container craze. Everything you read seemed to be about containers and how they were going to revolutionize the software development industry. The thing is, containers are just, well, containers. They contain all the elements needed to run a particular process, which dramatically simplifies software by making it smaller and self-contained, but by themselves they’re just a small part of something bigger. That bigger thing, as I covered in a recent blog, is called a microservice.

How Dynamic Orchestration, Cloud Containers, and Microservices Work Together

Microservices will revolutionize software development as everything moves to the cloud. But here again, they won't do it alone. If you're familiar with cloud-native architecture, you'll recall there are three key requirements of a cloud-native framework: containers, microservices and dynamic orchestration. This last requirement is absolutely critical, because it's where the value of containers and microservices is realized. Small, agile and self-contained is very important, but only if you can automate and perpetuate the software lifecycle.

If you think of the lifecycle of software, there are three basic stages: birth (deployment), midlife (updates and changes) and death (removal). In the current telco world, each stage requires a lot of effort, which tends to stifle innovation and adaptation. With dynamic orchestration, you can automate these stages so that new software versions are instantly created, updated and removed based on pre-defined criteria, including real-time traffic demands. You read that right: with dynamic orchestration, you can have a system that knows when to create new instances of microservices or retire old ones, and does so automatically because you've already told it how to behave under certain conditions.
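A minimal, concrete example of that "tell it once" behavior is a Kubernetes HorizontalPodAutoscaler, which creates and retires instances of a microservice as traffic moves; the names and thresholds below are illustrative:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-microservice
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-microservice          # the workload being scaled
  minReplicas: 2                   # never fewer than two instances
  maxReplicas: 20                  # cap on automatic growth
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80   # add or retire pods around 80% CPU
```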

Kubernetes & Helm: Key Tools for Dynamic Orchestration

The keys to dynamic orchestration are one relatively new software tool, Kubernetes, and one very new software tool, Helm. Together, Kubernetes and Helm allow software teams to run containers in a contextual group called a pod and apply declarative statements to manage the behavior of those pods under certain conditions. In the case of Affirmed's virtualized Access and Mobility Management Function (AMF), for example, the AMF pod features three separate containers: the AMF call control container, the service mesh communication proxy container (referred to as a helper container) and Affirmed's infrastructure container, which features multiple supplemental sidecars (a Kubernetes term) providing supporting services. Kubernetes groups pod types like these together and sets rules or conditions regarding the AMF network function that automatically apply to all instances of it in the network.

The role of Helm is to provide Kubernetes with a system-level view of the network, giving Kubernetes a kind of built-in intelligence. Where a VNF manager tells OpenStack exactly what to do (an imperative command), Helm shares intelligence about pod types and the rules/conditions that have been declared as ideal (e.g., use up to 80 percent of CPU for a particular pod type), so that Kubernetes can dynamically and automatically do the right thing under the right circumstances.
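In Helm terms, those declarations live in a chart's values file and are applied in one step. A hedged sketch, using the value names that Helm's standard chart scaffolding generates (the chart and release names are illustrative):

```yaml
# values.yaml: the desired state Helm renders into Kubernetes resources.
replicaCount: 3                        # run three instances of this pod type
autoscaling:
  enabled: true
  minReplicas: 3
  maxReplicas: 12
  targetCPUUtilizationPercentage: 80   # the "up to 80 percent of CPU" declaration
resources:
  limits:
    cpu: "2"                           # hard per-pod CPU ceiling
# applied with, e.g.:  helm upgrade --install amf ./amf-chart -f values.yaml
```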

Kubernetes and Helm are a significant improvement in the cloud world. In fact, the chief mission of the Cloud Native Computing Foundation (CNCF) is to promote the adoption of Kubernetes for all cloud computing environments. Relative to OpenStack plus one (or more) VNF managers, Kubernetes + Helm is far lighter and more agile. Where OpenStack is reactive (it only does what it's told to do), Kubernetes is proactive: told once, it remembers the ideal state for a particular pod type and automatically takes corrective action to achieve that state whenever possible.

The idea of a dynamically automated network has been a holy grail for a long time. What Kubernetes + Helm does is invite telco software developers and their customers to the non-stop, automatic party of the 5G future. We hope we'll see you there.

 

Cloud-Native Microservices: Dream A Little Dream

by Ron Parker

If you haven’t heard of the Cloud Native Computing Foundation (CNCF) yet, don’t worry, you will soon. Although they’ve only been around for a few years, the CNCF is already helping to shape the future by rallying developers and engineers around the importance of building cloud-native networks and applications. Key to their vision of the future is a little something known as a microservice. Now, you might not think something with such a small-sounding name could be a big deal, but you’d be mistaken. Cloud-native microservices are a very big deal if you plan on delivering applications in the future.

Microservices are one of the three building blocks that define a cloud-native approach, according to the CNCF. The other two blocks are containers and dynamic orchestration, and each is worth its own blog, which I hope to write in the near future. But I'm focusing on microservices first because I believe they have the potential to be the most impactful to our telecommunications industry.

Why Cloud-Native Microservices are Important

Before I get ahead of myself, let me backtrack by explaining what a microservice is and why it's important. Software development in the telco industry today is a big and bulky affair. Software consumes a lot of processing and storage, even after virtualization, and evolves slowly, with one or two major releases per year. It's expensive to create, test and update this software because it has so many unique parts: policy engine, security, database, etc. As a result, new telco applications are few and far between: the opposite of how most agile over-the-top competitors and cloud service providers work today.

Cloud-native microservices are essentially the moving parts that make up traditional software, unbundled and placed in containers for easier management. So instead of a single, massive software application consuming several virtual machines, you might have 30 individual microservices that are constantly updated—weekly or even daily—and run on one or two virtual CPUs for much better scalability and efficiency. More importantly, new applications can be composed from microservices in a matter of days or even hours, and torn down just as quickly without impacting the network.

Now Is the Time for Telcos to Adopt Cloud-Native Microservices

If this system sounds familiar, it should—it’s the way that Amazon, Google and other visionary cloud providers have been building their applications for years. Today, most enterprises have adopted the microservices model as well, from mobile healthcare applications to online financial services. For telcos, the time is right to adopt microservices, particularly as they look ahead to the new revenue opportunities presented by 5G and technologies such as network slicing. The ability to slice traffic based on unique requirements (e.g., latency, security, scalability) will be critical as telco carriers partner with their enterprise customers to enable the next generation of 5G services. Network slicing presumes that carriers will also be able to provide tailored services across those slices, something that will require a microservices-based approach if telcos hope to compete with agile OTT/cloud companies for that opportunity.

In a very real sense, cloud-native microservices have the power to transform telcos into innovation factories. For years, telcos have been forced to dream big: if an idea couldn’t justify the huge investment in time and cost required to develop, test and launch it as a service, it was shelved. With microservices, even small dreams can come true, because the investment in time and cost is minimal, and the impact on existing network services is nominal. That fail-fast, succeed-faster model is what allows the Amazons and Googles of the world to take chances without taking risks.

It’s a world that telcos are about to discover for themselves. So go ahead, dream a little dream. It might just have a bigger impact than you ever imagined.