Using Containers Cloud Architecture without Virtualization: Isn’t it Ironic?

by Ron Parker

The typical network transformation journey looks something like this: Linux, then virtual machines, then containers. But this blog is about the road less taken, and how service providers can bypass virtualization entirely, using containers to go directly to the cloud.

That’s kind of a revolutionary concept. After all, many in IT have been trained to view virtualization as a necessary evolutionary step. Everything is more efficient in a virtualized environment, we were told. And then containers came along. The new reality is that you don’t need virtual machines to run containers. In fact, there are many cases where virtualization actually hurts the performance of a containerized application. In this article, we discuss the advantages of using containers vs. virtual machines.

Comparing Virtualization and Container Management Platforms

How can virtualization be a bad thing? Well, virtualization is great if you need to move and share applications between different physical servers, but it comes at a cost: roughly 10% of a server’s CPU is consumed by running the hypervisor and guest operating systems. Containers, by contrast, invoke the services they need directly from their cloud service provider: the storage, load balancing, and auto-scaling services in particular. That frees up resources on the server, which results in much faster performance—in some cases, as much as 25% faster. (source: www.stratoscale.com/blog/data-center/running-containers-on-bare-metal/).

The Benefits of Container Management Platforms 

When I talk about containers providing a service, I’m really talking about Kubernetes, the container management platform. Kubernetes not only supports a variety of cloud environments—OpenStack, AWS, Google, Azure, etc.—but understands which environment it’s in and automatically spins up the appropriate service, such as ELB (Elastic Load Balancer) for the AWS environment or Octavia if it’s an OpenStack environment. Kubernetes doesn’t distinguish between multi-tenant servers running virtual machines and bare-metal servers. It sees each VM or server, respectively, as a node in a cluster. So whether or not you virtualize your servers has no impact on your ability to run containers, although it does impact management and performance. Basically, if you’re running a virtualized environment, you have two tiers of orchestration instead of one: the VIM (Virtualization Infrastructure Manager) and Kubernetes.
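To make that environment-aware behavior concrete, here is a minimal sketch of how a workload asks Kubernetes for a load balancer. A Service of type LoadBalancer is all you declare; Kubernetes provisions whatever its environment provides (ELB on AWS, Octavia on OpenStack). The service name, labels, and ports below are illustrative, not from any actual CNF:

```yaml
# Hypothetical CNF front-end service; names and ports are illustrative.
apiVersion: v1
kind: Service
metadata:
  name: cnf-frontend
spec:
  type: LoadBalancer   # Kubernetes asks its environment for the native load balancer
  selector:
    app: cnf-frontend  # selects the pods to balance traffic across
  ports:
    - port: 443        # externally exposed port
      targetPort: 8443 # port the container actually listens on
```

The same manifest works unchanged across clouds, which is exactly the single-tier-of-orchestration point above.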

But wait a minute, you may be thinking: I thought you needed a virtualized environment to run OpenStack? There’s the irony or, more to the point, Ironic. OpenStack Ironic is designed specifically to let OpenStack manage bare-metal servers. With it, you can group bare-metal servers into a Kubernetes cluster just as you would group VMs into one. What if you want to run containers on bare-metal servers without OpenStack? This can be done too, and is known as “Kubernetes bare metal.” Load balancing, in this case, can be provided by the MetalLB project.
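For the “Kubernetes bare metal” case, MetalLB fills the load-balancer role by handing out addresses from a pool you configure. A minimal layer-2 sketch follows; the address range is, of course, an example, and this uses MetalLB’s ConfigMap-style configuration:

```yaml
# Illustrative MetalLB configuration: a single layer-2 address pool.
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2          # announce service addresses via ARP on the local network
      addresses:
      - 192.168.1.240-192.168.1.250
```

With this in place, a Service of type LoadBalancer on bare metal receives an address from the pool instead of a cloud provider’s load balancer.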

If running a cloud environment on bare-metal servers feels like taking a step back to take a step forward, take heart: Chances are, you’ll want both virtualized and non-virtualized servers in your cloud environment. The future isn’t a one-size-fits-all proposition for service providers. There will be cloud services for residential customers that may have ultra-high utilization rates, in which case the performance benefits of a bare-metal server make more sense. For finely sliced enterprise services, however, a flexible multi-tenant model is more desirable. The common thread for both approaches is agility.  

Of course, there’s a lot more to this discussion than we could “contain” to a single blog, so feel free to reach out to us if you want to take a deeper dive into cloud architectures.

 

Microservices Observability Brings Clarity to Cloud-Native Network

by Ron Parker

In the world of microservices, observability is an important concept. Yes, containers and automation are very important concepts too, but with observability, you can see what your microservices are doing, and you have the assurance that they’re performing and behaving correctly.

In the traditional telco world, this concept is known as service assurance. There are vendors who specialize in service assurance solutions that observe network traffic by pulling data straight from the fiber connections through physical taps. But how do you put a physical tap on a virtual machine? And how do you monitor cloud-native microservices when there may be thousands of them deployed on a single VM at a given moment in time?

The answer, of course, is you can’t. What works in the physical world is, in the cloud-native world, virtually impossible. Instead, the broader cloud community has developed a robust ecosystem of observability tools to provide service assurance in cloud-native networks. Some of these tools are relatively new, while others have been used by enterprises and cloud providers for years. When Affirmed built its 5G Core solution, we made an important decision to leverage the best microservices observability tools through our Platform as a Service (PaaS) layer, giving mobile network operators (MNOs) a simple, effective way to deliver service assurance.

Four Aspects of Observability in Cloud-Native Networks

The concept of observability in the cloud-native networking world is similar to FCAPS (fault, configuration, accounting, performance and security management) in the traditional telco world. Observability can be broken into four categories: application performance management; logging and events; faults; and tracing.

Application performance management

Application performance management (APM) measures key performance indicators (KPIs) such as latency. The Cloud Native Computing Foundation (CNCF) recommends Prometheus as its KPI collection/storage tool. Prometheus is tightly integrated with Kubernetes and Helm; when you deploy a cloud-native Network Function (CNF), Prometheus is deployed with it, allowing it to scrape data from microservices in a non-intrusive and efficient manner. This data is then visualized through a tool called Grafana, which creates dashboards using widgets; Affirmed also integrates Grafana and includes pre-built dashboards and widgets in its 5GC solution.
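As a sketch of what non-intrusive scraping looks like in practice, a common convention (supported by the community Prometheus Helm chart) is to annotate a pod so Prometheus discovers and scrapes it automatically. The pod name, port, and image below are hypothetical:

```yaml
# Hypothetical microservice pod annotated for Prometheus service discovery.
apiVersion: v1
kind: Pod
metadata:
  name: session-mgmt               # illustrative microservice name
  annotations:
    prometheus.io/scrape: "true"   # opt this pod in to scraping
    prometheus.io/port: "9090"     # port serving the metrics endpoint
    prometheus.io/path: "/metrics" # path of the metrics endpoint
spec:
  containers:
    - name: session-mgmt
      image: example.com/session-mgmt:1.0   # illustrative image
```

The microservice only has to expose a /metrics endpoint; Prometheus pulls from it on its own schedule, so nothing intrusive runs inside the application.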

Logging and events

This records what is actually happening to apps and microservices in the cloud-native environment. Cloud providers generally use what is known as the EFK stack—Elasticsearch, Fluentd and Kibana—to record logs and generate alerts. At a high level, Fluentd collects the logs and messages, Elasticsearch stores them and Kibana provides the data visualization, again using dashboards.
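A minimal sketch of the Fluentd side of that pipeline: tail the container log files and forward them to Elasticsearch, where Kibana can index and display them. The host name assumes an in-cluster Elasticsearch service, and the paths are the conventional Kubernetes container-log locations:

```
# Illustrative Fluentd configuration: tail container logs, ship to Elasticsearch.
<source>
  @type tail
  path /var/log/containers/*.log
  pos_file /var/log/fluentd-containers.log.pos
  tag kubernetes.*
</source>

<match kubernetes.**>
  @type elasticsearch              # requires the fluent-plugin-elasticsearch plugin
  host elasticsearch.logging.svc   # assumed in-cluster Elasticsearch service name
  port 9200
  logstash_format true             # index as logstash-YYYY.MM.DD, which Kibana expects
</match>
```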

Faults

To detect faults, the open-source community offers two tools: ElastAlert (so named because it is designed to work with Elasticsearch) and Alertmanager, which comes bundled with Prometheus. ElastAlert lets you set specific rules and policies to manage alerts (e.g., when X happens, send Y alert via SMS), while Alertmanager routes and delivers the alerts Prometheus raises when specific numerical thresholds are crossed.
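To make the ElastAlert side concrete, a rule is a plain YAML file. The sketch below (rule name, index pattern, field names, thresholds, and recipient are all illustrative) fires when error logs arrive too frequently:

```yaml
# Illustrative ElastAlert rule: alert if 50+ error logs arrive within 5 minutes.
name: excessive-errors        # hypothetical rule name
type: frequency               # built-in rule type: count matching events in a window
index: logstash-*             # assumed Elasticsearch index pattern
num_events: 50
timeframe:
  minutes: 5
filter:
- query:
    query_string:
      query: "log_level: ERROR"   # assumed log field
alert:
- "email"
email:
- "noc@example.com"           # illustrative recipient
```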

Tracing

Tracing is something unique to the cloud-native world, allowing network operators to track the exchanges between different microservices. For example, microservice A might invoke microservice B to complete a process, which in turn invokes microservice C and ultimately returns control back to microservice A. As you might imagine, tracking the exchanges between every microservice instance would generate an almost-unmanageable amount of data, so tracing tools like Jaeger (a CNCF project) only collect a very small sample (the default sample rate is a tenth of one percent) of these exchanges for analysis and visualization. But even this small sample is useful, as it provides visibility into how microservices are functioning. And through policy, this global sampling percentage can be overridden for transactions containing certain data (e.g., for a particular 5GC ‘slice’ or from a particular user).
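That sampling behavior is configurable. As a sketch, the Jaeger collector can serve per-service sampling strategies from a JSON file; here the default is the 0.1% mentioned above, with a higher rate for one hypothetical service:

```json
{
  "default_strategy": {
    "type": "probabilistic",
    "param": 0.001
  },
  "service_strategies": [
    {
      "service": "smf",
      "type": "probabilistic",
      "param": 0.01
    }
  ]
}
```

A per-service override like this is one simple way to get deeper visibility into a service under investigation without turning up the global sampling rate.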

 

Beyond those four aspects of observability, there’s still more to see in a cloud-native architecture. For example, tools like Istio, Envoy and Kiali are used to illustrate the microservices topology (referred to in the industry as a service mesh) and show where errors are happening. (Service meshes are worthy of their own blog, so stay tuned for that.)

In addition to using open-source, third-party tools, a fully virtualized probing solution (a vProbe) can record logging and event data within the microservice itself. Think of a vProbe as logging and events on steroids: it captures detailed microservices data from every CNF and user endpoint, including TCP round-trip times and retransmission rates, deep packet inspection details, and more. Unlike traditional physical probes, a vProbe doesn’t negatively impact network performance, and it generates a wealth of data to support big-data analysis, artificial intelligence and machine-learning systems.

If your 5G vendor isn’t transparent about how its solutions support observability, maybe you should be looking for a new vendor.

Cloud-Native Microservices: Dream A Little Dream

by Ron Parker

If you haven’t heard of the Cloud Native Computing Foundation (CNCF) yet, don’t worry, you will soon. Although it has only been around for a few years, the CNCF is already helping to shape the future by rallying developers and engineers around the importance of building cloud-native networks and applications. Key to its vision of the future is a little something known as a microservice. Now, you might not think something with such a small-sounding name could be a big deal, but you’d be mistaken. Cloud-native microservices are a very big deal if you plan on delivering applications in the future.

Microservices are one of the three building blocks that define a cloud-native approach, according to the CNCF. The other two blocks are containers and dynamic orchestration, and each of those is worth its own blog, which I hope to write in the near future. But I’m focusing on microservices first because I believe they have the potential to be the most impactful to our telecommunications industry.

Why Cloud-Native Microservices are Important

Before I get ahead of myself, let me backtrack by explaining what a microservice is and why it’s important. If you look at software development in the telco industry today, it’s a big and bulky affair. Software consumes a lot of processing and storage, even after virtualization, and evolves slowly over one or two major iterations per year. It’s expensive to create, test and update this software because it has so many unique parts: policy engine, security, database, etc. As a result, new telco applications are few and far between—the opposite of how most agile over-the-top competitors and cloud service providers work today.

Cloud-native microservices are essentially the moving parts that make up traditional software, unbundled and placed in containers for easier management. So instead of a single, massive software application consuming several virtual machines, you might have 30 individual microservices that are constantly updated—weekly or even daily—and run on one or two virtual CPUs for much better scalability and efficiency. More importantly, new applications can be composed from microservices in a matter of days or even hours, and torn down just as quickly without impacting the network.

Now is The Time for Telcos to Adopt Cloud-Native Microservices

If this system sounds familiar, it should—it’s the way that Amazon, Google and other visionary cloud providers have been building their applications for years. Today, most enterprises have adopted the microservices model as well, from mobile healthcare applications to online financial services. For telcos, the time is right to adopt microservices, particularly as they look ahead to the new revenue opportunities presented by 5G and technologies such as network slicing. The ability to slice traffic based on unique requirements (e.g., latency, security, scalability) will be critical as telco carriers partner with their enterprise customers to enable the next generation of 5G services. Network slicing presumes that carriers will also be able to provide tailored services across those slices, something that will require a microservices-based approach if telcos hope to compete with agile OTT/cloud companies for that opportunity.

In a very real sense, cloud-native microservices have the power to transform telcos into innovation factories. For years, telcos have been forced to dream big: if an idea couldn’t justify the huge investment in time and cost required to develop, test and launch it as a service, it was shelved. With microservices, even small dreams can come true, because the investment in time and cost is minimal, and the impact on existing network services is nominal. That fail-fast, succeed-faster model is what allows the Amazons and Googles of the world to take chances without taking risks.

It’s a world that telcos are about to discover for themselves. So go ahead, dream a little dream. It might just have a bigger impact than you ever imagined.

 

Why CSPs Should be “Open” to New Architectures

by Angela Whiteford

By now it’s clear that Communications Service Providers (CSPs) are facing considerable competition from a new breed of market entrants. To make matters worse, these new players are not playing by the same rules, or held to the same standards, as those who have provided reliable services to all of us for decades. With today’s field of play tilted in favor of these new players, what are CSPs to do to remain competitive as 5G comes into play?

In short, the answer lies in being open to new approaches and architectures that are part of the way networks are becoming virtualized.

While the first wave of virtualization was focused on reducing costs and increasing performance, the second wave has taken us toward cloud-native architectures that can be universally deployed.  But we aren’t done yet.

We believe that in order to effectively take advantage of these new approaches, CSPs should keep in mind three main areas: cloud-native architecture, open source and telco-grade functionality.

When cloud-native architectures are coupled with open-source technology, the benefits become even more profound. Specifically, cloud-native architectures that leverage open-source systems allow CSPs to move forward using standardized methods in this new era of building carrier-grade networks.

When embarking on this new approach, the importance of being “telco grade” should not be overlooked. Telecom networks have stringent requirements in the areas of interoperability, latency, service delivery and policy management that must be addressed when building a next-gen 5G mobile core. To be successful, several things must be accounted for, including:

  • Support for Multi-Network Interfaces – to support container environments
  • Data Plane Acceleration – to provide the performance that will be required by 5G workloads
  • Integrated Network Probing – to provide analytics and real-time data without impacting performance
  • Integrated Workflows – to allow efficient creation of new services
  • Topology and Environmental Awareness – to handle the unique topologies (edge/core) of telecom networks

By deploying a cloud-native platform that leverages open source technology and delivers web-scale performance, speed and simplicity, CSPs can move confidently with their transformation, while keeping themselves open to all the possibilities and opportunities that 5G will provide.

To learn more about how Affirmed Networks can support CSPs in this area, read our recent paper, “An Open Approach to Building 5G Networks.”

Affirmed Networks Joins Linux Foundation Networking Fund

by Angela Whiteford

Affirmed Networks is pleased to announce that we have joined the Linux Foundation Networking Fund, an organization focused on facilitating collaboration and operational excellence across the open networking projects, such as ONAP and OPNFV, that are important to our customers as they continue to transform their networks.

Founded in 2000, the Linux Foundation provides tools, training, and events to scale any open source project. The Linux Foundation has become a true force in the industry, attracting the top developers as part of an ecosystem focused on accelerating open technology development and industry adoption.

In our view, open-source technologies are critical to driving innovation across the entire ecosystem. We are honored to join the other leading companies that are already part of the Linux Foundation. As members, we look forward to contributing to the advancement of open-source technologies to ensure the community remains at the forefront of innovation as the industry continues its transformation to the cloud-based architectures that will support 5G.

The LFN community will come together on September 25-27, 2018 in Amsterdam for Open Networking Summit Europe, the industry’s premier open networking event, gathering enterprises, service providers and cloud providers across the ecosystem to share learnings, highlight innovation and discuss the future of Open Source Networking, including SDN, NFV, orchestration and the automation of cloud, edge, network, and IoT services.

More information on the Linux Foundation can be found here.