
Author: Ron Parker

Going Native: Why Carriers Need to Embrace Cloud-Native Technologies Right Now

by Ron Parker

The cloud isn’t an if. It’s a when. And it will probably start like this: a few forward-thinking carriers will begin moving large portions of their network functions into the cloud and realize that the savings are almost shocking. Not the 2X CapEx savings we’ve seen from replacing physical servers with cloud-based servers, but a 10X CapEx-plus-OpEx savings that comes when carriers move network hardware, applications and a significant portion of operations into the cloud. And when that happens, the rush to the cloud will be deafening.

Right now, the migration to the cloud seems relatively restrained, almost quiet. 5G still feels far off in the future. (In reality, 5G’s arrival is imminent.) Meanwhile, carriers are looking to get more mileage out of their existing infrastructure, and replacing it with cloud servers isn’t a compelling narrative. The compulsion will come when early adopters start proving that the cloud is a game-changer. At that moment, carriers will need to move quickly or be left behind. If they don’t already have a cloud-ready network architecture in place, they can forget about coming in first, second or third place in the race to deliver 5G services. That may sound like a dire prediction, but it doesn’t have to be.

 

What Cloud-Native Technologies Hold for Carriers

A cloud-native network architecture can be had today—without ripping and replacing current infrastructure and without waiting for 5G to realize a return on investment. We see this as a phased approach to 5G: start investing today in cloud-native capabilities such as control and user plane separation (CUPS), containers, Kubernetes and cloud-native network functions (CNFs) to run your existing network more efficiently, then seamlessly shift those capabilities into the cloud when you’re ready.

Benefits of Going Cloud-Native

There are several important benefits of adopting cloud-native technologies now.

  • Network agility: carriers can quickly create and turn up new services, particularly private networks that will serve an increasing number of enterprises.
  • Operational automation: automating operations can dramatically reduce costs and accelerate time to market.
  • Network flexibility: carriers can deploy the same cloud-native architecture on their private infrastructure to serve millions of existing subscribers, for example, while spinning up new enterprise services in the cloud to avoid impacting those subscriber services.

This hybrid approach, by the way, is how we expect most carriers will consume the cloud initially. As carriers pursue more enterprise opportunities and use cases, it’s critical that they continue to deliver the same or better levels of service to their existing subscriber base—it is, after all, where the bulk of their revenues come from today. Focusing on enterprise services in the cloud reduces risk and allows carriers to easily spin up new network slices for each enterprise customer. Managing per-customer networks may have been less of an issue in the past, when carriers ran private LTE networks for a handful of large customers, but it becomes unmanageable in a traditional network architecture when you have thousands of enterprise customers.

As you would expect, of course, the benefits of a cloud-native architecture are most apparent in the cloud. That’s especially true when the cloud-native architecture and the cloud architecture are managed by the same company, as they are today with Affirmed Networks and Microsoft Azure. Whether you deploy our cloud-native architecture in your own private network or in the public cloud, you’re getting the same code—tested and hardened in both environments—for a solution that is fully prepared for CI/CD environments from day one. No other company today can say that.

And, rest assured, day one is coming sooner than you think.

The End of the ISV Era

by Ron Parker

It’s no secret that cloud services are on the rise. Gartner estimates that businesses will spend $266.4 billion on cloud services this year, growing to $354.6 billion by 2022. Why is cloud consumption rising so quickly? Because free-market economies hate inefficiencies, and the cloud is all about efficiency. But that’s not to say cloud adoption hasn’t created ripples in the industry, as we can see by looking at its history and its impact on ISVs (Independent Software Vendors).

 

History of Cloud Services

In the early days of cloud, enterprises were primarily attracted to the cloud for data center outsourcing: servers, switches, storage, infrastructure-oriented software, and the expertise to manage it all. The premise was that the cloud provider could offer better economies of scale by hosting multiple customer data centers and streamlining their own operations through in-house expertise, particularly around automation. In exchange for a monthly fee, enterprises could eliminate the hardware, software and IT resources associated with maintaining their own data center environment.

Infrastructure Abstraction

Over time, enterprises (and their software suppliers) were able to focus more on the applications and less on the integration points of their legacy infrastructure—the value of infrastructure abstraction was profitably exploited. This soon led to enterprises embracing managed Software as a Service (SaaS) offerings for the supporting systems of their mission-critical applications.

Examples include observability-oriented systems (e.g., Elasticsearch, Logstash, Kibana, Prometheus, Grafana) and database systems (e.g., MongoDB, Cassandra). With the cloud provider offering these systems as managed services, the enterprise no longer needed to worry about deploying and supporting them; it could simply order them through the cloud provider’s portal.

As all this lower-layer abstraction was happening, however, the remaining applications and the business logic they contained grew more complex—so complex, in fact, that the traditional model of licensing application software to another organization for deployment and operation began to disappear. Modern software is consumed in one of two ways but operated in only one: the organization that builds the software operates it. Enterprises consume the software they write directly and consume everything else as a service via APIs over the Internet; the API service provider is, of course, operating software that it wrote.

Businesses Defined by Software

While this change was happening, an even more important transformation was taking place in the software industry. A growing number of businesses became defined by software, which created a greater need for improved agility. It was no longer acceptable to deliver only two or four upgrades per year; instead, businesses needed tens, hundreds and even thousands of software updates per day.

CI/CD (Continuous Integration/Continuous Deployment)

In 2010, responding to this need, Jez Humble and David Farley extended the existing concept of continuous integration with their book Continuous Delivery, laying the groundwork for what the industry now calls CI/CD (Continuous Integration/Continuous Deployment). CI/CD proposed combining the previously separate functions of development and operations into a single function, DevOps. The DevOps team would be responsible for feature development, test automation and production operations. CI/CD maintained that only by breaking down internal barriers could an enterprise reach the point of executing 10 or more updates per day.
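To make the idea concrete, here is a minimal sketch of a fully automated pipeline in Python. The stage commands are hypothetical placeholders, not a prescribed toolchain; the point is simply that every stage runs on every commit with no human gate in between.

```python
import subprocess

# Hypothetical CI/CD pipeline sketch: every stage runs automatically on each
# commit, with no human gate between commit and production.
# The commands are illustrative placeholders, not a prescribed toolchain.
STAGES = [
    ("build",     ["make", "build"]),
    ("unit-test", ["make", "test"]),
    ("package",   ["docker", "build", "-t", "app:latest", "."]),
    ("deploy",    ["kubectl", "rollout", "restart", "deployment/app"]),
]

def run_pipeline() -> None:
    for name, cmd in STAGES:
        print(f"stage: {name} -> {' '.join(cmd)}")
        # check=True aborts the release on the first failing stage.
        subprocess.run(cmd, check=True)

if __name__ == "__main__":
    run_pipeline()
```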

There was only one problem with CI/CD: existing software architectures were poorly suited to frequent releases because virtually all software was released monolithically. The software code may have been modularized, but the release needed to include all the functionality, fully tested. How fast could enterprises update a complex, monolithic application?

Microservices

As enterprises were struggling to answer this question, the idea of microservices—which had been floating around since 2007—began to take hold in architectural circles in 2011. The premise: by breaking larger applications into bite-size pieces that are developed and released independently, application development teams could release tens, hundreds and even thousands of software updates per day using a fully automated CI/CD pipeline.

This meant that no human intervention would be required between the moment the developer committed the code and the moment that code ran in a production environment. Microservices—particularly stateless microservices—bring their own complexities, however: API version controls, DB schema controls, observability, and more. Fortunately, in CI/CD’s DevOps team model, all the necessary expertise is contained in a single group.
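As a toy sketch of one of those complexities, API version control: a microservice keeps older handler versions alive so that independently released callers don’t break. The routes and payload shapes here are hypothetical.

```python
# Toy sketch of API version control in a microservice. Routes and payload
# shapes are hypothetical; the point is that old handler versions stay alive
# so independently released callers keep working.
HANDLERS = {
    ("GET", "/v1/subscribers"): lambda req: {"schema": 1, "items": []},
    ("GET", "/v2/subscribers"): lambda req: {"schema": 2, "items": [], "cursor": None},
}

def dispatch(method: str, path: str, request: dict) -> dict:
    handler = HANDLERS.get((method, path))
    if handler is None:
        return {"status": 404, "error": "unsupported API version or route"}
    return handler(request)

print(dispatch("GET", "/v1/subscribers", {}))  # old callers keep working
print(dispatch("GET", "/v2/subscribers", {}))  # new callers get the new schema
```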

 

The Impact on ISVs

So how does all this impact ISVs? Remember, these are the entities that produce applications and license them for use by others. Whether the licensing is annual or perpetual, the main issue is that the purchaser is ultimately responsible for the deployment and operation of that software. ISVs often supplement their licensing with extensive professional services and training as a means to achieve the requisite knowledge transfer. But that knowledge transfer is never complete, and the ultimate experts on the vendor’s software remain in the vendor’s organization.

What’s the natural consequence of this? The customer consumes hosted or fully managed services. This is a change from the way telcos have done things in the past, and change is never easy, but the efficiency and agility benefits of moving to a modern, cloud-native model can have a profoundly positive impact on telcos going forward.

Think You’ve Got 5G Security Issues Protected? Think Again.

by Ron Parker

As interest in 5G continues to heat up, you’re likely to hear a lot more about 5G security. You may not, however, be hearing the whole story. Most conversations around 5G security center on the standards put forward by 3GPP last year. Those standards are a good starting point, don’t get me wrong, but they’re not the last word on 5G security issues by a long shot. Why? Because they leave container security out of the conversation entirely.

 

5G Security and Containers

There are a lot of new network elements to consider in a 5G architecture, but the biggest change in 5G is that almost everything now runs on containerized software. In terms of 5G security threats, containers are prime targets for cybercriminals because they hold sensitive data such as passwords and private keys. Understanding how to protect containers is just as important as protecting the transport layers and gateways in a 5G network. Building on what 3GPP has proposed, we believe that protection from 5G security threats has four main objectives, only two of which are currently addressed by 3GPP’s recommendations.

 

A Four-point Approach to 5G Security

Let’s start with what 3GPP has already proposed in its 5G security standards:

1. A trust model with two distinct, onion-layered approaches for roaming and non-roaming networks. In the non-roaming network, this model features an Access and Mobility Management Function (AMF) and Unified Data Management (UDM) in the core, wrapped by the Authentication Server Function (AUSF). For roaming networks, 3GPP introduces the Security Edge Protection Proxy (SEPP) for secure connectivity between the home and visited networks, and the Network Exposure Function (NEF) to protect core services from inappropriate enterprise requests.

2. Encryption and authentication via Transport Layer Security (TLS), certificate management and OAuth2.
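To ground point 2, here is a minimal sketch of mutually authenticated TLS between two network functions, using Python’s standard ssl module. The certificate paths and host name are hypothetical placeholders.

```python
import socket
import ssl

# Minimal sketch of mutually authenticated TLS between two network functions.
# The certificate paths and host name below are hypothetical placeholders.
context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH,
                                     cafile="operator-root-ca.pem")
# Present a client certificate so the peer can authenticate us as well.
context.load_cert_chain(certfile="nf-client.pem", keyfile="nf-client.key")

with socket.create_connection(("ausf.core.example", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="ausf.core.example") as tls:
        # Both sides are now authenticated and the channel is encrypted.
        print(tls.version(), tls.getpeercert()["subject"])
```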

But what about security for the 5G services themselves? As the network shifts from hardware to software, telco operators need to have software security provisions in place to protect their data and their customers. At Affirmed, we see this as involving two distinct but complementary initiatives:

3. Secure software development. App developers need to ensure they’re writing secure code, validating it (e.g., through static code analysis), drawing from secure repositories and building everything on a secure base-layer foundation (e.g., Fedora).

4. Secure containers. Containers represent attractive attack vectors for cybercriminals. 5G operators need to protect these containers by securing the orchestration engine (Kubernetes) with proper role-based access controls, guarding containers in use (through runtime container security) and managing access permissions between the containers via automated policy-driven networking and service mesh controls.
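As a conceptual illustration of the role-based access controls mentioned in point 4, here is a toy policy check in Python. Real Kubernetes RBAC is configured declaratively with Role and RoleBinding objects; this sketch only conveys the idea of least privilege.

```python
# Conceptual illustration of role-based access control for the orchestration
# layer. Real Kubernetes RBAC is configured declaratively with Role and
# RoleBinding objects; this toy policy check only conveys the idea.
ROLE_PERMISSIONS = {
    "viewer":   {("get", "pods"), ("list", "pods")},
    "operator": {("get", "pods"), ("list", "pods"),
                 ("delete", "pods"), ("create", "deployments")},
}

def is_allowed(role: str, verb: str, resource: str) -> bool:
    return (verb, resource) in ROLE_PERMISSIONS.get(role, set())

assert is_allowed("operator", "delete", "pods")
assert not is_allowed("viewer", "delete", "pods")   # least privilege in action
```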

 

The need for container security isn’t unique to telcos, and that’s actually a good thing because they can now leverage existing security tools that have already been developed for other cloud-native applications. Unfortunately, a lot of telco vendors aren’t familiar with open-source tools like Aqua (for container security) and Falco (for orchestration engine security). Instead, these vendors leave software out of the security discussion, and that leaves telco operators with some big security holes to fill.

 

The Bottom Line on 5G Security

If telco operators expect to dominate the 5G landscape, they’ll need to stand on the shoulders of some pretty big cloud companies, particularly where containerization and security are concerned. 3GPP’s security recommendations are a good introduction to 5G security needs, but they’re only half of the 5G security story; software and container security is the other half. If your vendor is telling you only about the first half, talk to Affirmed Networks.

 

Using Containers in a Cloud Architecture without Virtualization: Isn’t It Ironic?

by Ron Parker

The typical network transformation journey looks something like this: Linux, then virtual machines, then containers. But this blog is about the road less taken: how service providers can skip virtualization entirely by using containers and go directly to the cloud.

That’s kind of a revolutionary concept. After all, many in IT have been trained to view virtualization as a necessary evolutionary step. Everything is more efficient in a virtualized environment, we were told. And then containers came along. The new reality is that you don’t need virtual machines to run containers. In fact, there are many cases where virtualization actually hurts the performance of a containerized application. In this article, we discuss the advantages of using containers vs. virtual machines.

Comparing Virtualization vs. Container Management Platforms

How can virtualization be a bad thing? Well, virtualization is great if you need to move and share applications between different physical servers, but it comes at a cost: roughly 10% of a server’s CPU is consumed by the virtualization layer itself. Containers, by contrast, invoke the services they need from their cloud service provider: storage, load balancing, and auto-scaling in particular. That frees up CPU on the server, which results in much faster performance: in some cases, as much as 25% faster (source: www.stratoscale.com/blog/data-center/running-containers-on-bare-metal/).
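As a back-of-the-envelope illustration of that overhead figure: the 10% comes from the text above, while the 32-core server is an assumed example, not a measurement.

```python
# Back-of-the-envelope view of the overhead claim above. The 10% figure comes
# from the text; the 32-core server is an assumed example, not a measurement.
server_cores = 32
hypervisor_overhead = 0.10   # fraction of CPU consumed by the virtualization layer

usable_virtualized = server_cores * (1 - hypervisor_overhead)
print(f"bare metal:  {server_cores:.1f} cores available to containers")
print(f"virtualized: {usable_virtualized:.1f} cores available to containers")
```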

The Benefits of Container Management Platforms 

When I talk about the advantages of containers as a service, I’m really talking about Kubernetes, the container management platform. Kubernetes not only supports a variety of cloud environments—OpenStack, AWS, Google, Azure, etc.—but understands which environment it’s in and automatically spins up the appropriate service, such as ELB (Elastic Load Balancer) in an AWS environment or Octavia in an OpenStack environment. Kubernetes doesn’t distinguish between multi-tenant servers running virtual machines and bare-metal servers; it sees each VM or server, respectively, as a node in a cluster. So whether or not you virtualize your servers has no impact on your ability to run containers, although it does impact management and performance. Basically, if you’re running a virtualized environment, you have two tiers of orchestration instead of one: the VIM (Virtualized Infrastructure Manager) and Kubernetes.
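Conceptually, that environment awareness amounts to a dispatch like the sketch below. This is illustrative only, not Kubernetes’ actual cloud-provider code.

```python
# Illustrative sketch only (not Kubernetes' actual cloud-provider code):
# environment awareness boils down to mapping the detected cloud to the
# matching load-balancer service.
LB_PROVIDERS = {
    "aws": "Elastic Load Balancer (ELB)",
    "openstack": "Octavia",
    "azure": "Azure Load Balancer",
    "gcp": "Google Cloud Load Balancing",
}

def provision_load_balancer(environment: str) -> str:
    try:
        return LB_PROVIDERS[environment]
    except KeyError:
        raise ValueError(f"no load-balancer integration for {environment!r}")

print(provision_load_balancer("openstack"))   # -> Octavia
```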

But wait a minute, you may be thinking: don’t you need a virtualized environment to run OpenStack? Therein lies the irony or, more to the point, Ironic. OpenStack Ironic is designed specifically to let OpenStack manage bare-metal servers. With it, you can group bare-metal servers into a Kubernetes cluster just as you would group VMs into a cluster. What if you want to run containers on bare-metal servers without OpenStack? This can be done too, and is known as “Kubernetes bare metal”. Load balancing, in this case, can be provided by the MetalLB project.

If running a cloud environment on bare-metal servers feels like taking a step back to take a step forward, take heart: chances are, you’ll want both virtualized and non-virtualized servers in your cloud environment. The future isn’t a one-size-fits-all proposition for service providers. Cloud services for residential customers may run at ultra-high utilization rates, in which case the performance benefits of bare-metal servers make more sense. For finely sliced enterprise services, however, a flexible multi-tenant model is more desirable. The common thread for both approaches is agility.

Of course, there’s a lot more to this discussion than we could “contain” to a single blog, so feel free to reach out to us if you want to take a deeper dive into cloud architectures.

 

Why Service Mesh for Microservices Makes Sense

by Ron Parker

Containers, Kubernetes, and microservices form the foundation of a cloud-native architecture, but they’re not the only considerations. In fact, as I write this, the Cloud Native Computing Foundation (CNCF) is considering adding a fourth pillar to its cloud-native requirements: the service mesh. A service mesh architecture for microservices makes sense, and in this blog we explain why.

 

What is Service Mesh?

When it comes to understanding and managing microservices, a service mesh is critical. Microservices are very small and tend to move around a lot, which makes them difficult to observe and track. At the service mesh layer, network operators can finally see clearly how microservices interact with one another (and with other applications), secure those interactions, and manage them based on customizable policies.

 

The Importance of Service Mesh Architecture

Provides Load Balancing for Microservices

One of the functions a service mesh provides is load balancing for microservices. Because microservices are instantiated in a dynamic fashion—that is, they can appear and disappear quickly—traditional network management tools aren’t granular enough to manage these microservice life cycle events. The service mesh, however, understands which microservices are active and which microservices are related (and how), and can provide policy enforcement at a granular level by deciding how workloads should be balanced. For example, if a microservice is upgraded, the service mesh decides which requests should be routed to instances running the stable version and which should be routed to instances running the upgraded version. This policy can be modified multiple times during the upgrade process and serves as the basis for what the industry calls a “canary upgrade”.
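Here is a minimal sketch of that canary policy in Python: send a configurable fraction of requests to the upgraded version and the rest to the stable one. The version labels and weights are illustrative, not a real mesh API.

```python
import random

# Sketch of the canary policy described above: send a configurable fraction
# of requests to the upgraded version and the rest to the stable one.
# Version labels and weights are illustrative, not a real mesh API.
def pick_backend(canary_weight: float) -> str:
    """Return which version should serve the next request."""
    return "v2-canary" if random.random() < canary_weight else "v1-stable"

# The policy can be re-tuned several times as confidence in v2 grows:
for weight in (0.05, 0.25, 1.0):
    sample = [pick_backend(weight) for _ in range(10_000)]
    print(f"weight {weight:.2f}: {sample.count('v2-canary') / len(sample):.1%} to canary")
```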

 

Improves Microservices Security

Another area where the service mesh plays a valuable role is microservices security. It is considered best practice to apply the same security guidelines to communications between microservices as to their communications with the “outside” world. This means authentication, authorization, and encryption need to be enforced for all inter-microservice communications. The service mesh enforces these security measures without affecting application code, and can also enforce security-related policies such as whitelists/blacklists or rate limiting in the event of a denial-of-service (DoS) attack. But the service mesh doesn’t stop at security between microservices; it extends security measures to the inbound/outbound communications that take place through the ingress and egress API gateways connecting microservices to other applications.
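To illustrate the rate-limiting idea, here is a minimal token-bucket limiter in Python: the kind of per-service policy a mesh can enforce during a DoS event. This is a sketch of the concept, not Istio’s actual API.

```python
import time

# Minimal token-bucket rate limiter: the kind of per-service policy a mesh can
# enforce during a DoS event. A sketch of the concept, not Istio's actual API.
class TokenBucket:
    def __init__(self, rate: float, burst: float):
        self.rate = rate            # tokens added per second
        self.capacity = burst       # maximum burst size
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False                # caller should reject or queue the request

limiter = TokenBucket(rate=100.0, burst=20.0)   # 100 requests/s, bursts of 20
print(limiter.allow())
```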

 

Provides Visibility of Microservices

Finally, the service mesh provides much-needed visibility into the microservices themselves. There are several tools available today that help with this: Istio, which provides the control plane for microservices; Envoy, a microservices sidecar that acts as the communications proxy for the API gateway functions; and Kiali, which visualizes the service mesh architecture at a given point in time and displays information such as error rates between microservices. If you’re unfamiliar with the sidecar concept, you can think of it as an adjunct container attached to the “main” microservice container that provides a supporting service—in the case of Envoy, intercepting both inbound and outbound REST calls.
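To make the sidecar idea tangible, here is a toy Python wrapper that intercepts outbound REST calls and records latency and status. Envoy does this transparently at the network level rather than in application code; the URL is a placeholder.

```python
import time
import urllib.request

# Toy illustration of the sidecar idea: wrap outbound REST calls so a proxy
# layer can record latency and status without touching application logic.
# (Envoy does this transparently at the network level; the URL is a placeholder.)
def proxied_get(url: str) -> bytes:
    start = time.monotonic()
    status = "error"
    body = b""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            status, body = resp.status, resp.read()
    finally:
        # In a real mesh, these metrics flow to visualization tools like Kiali.
        print(f"outbound GET {url} -> {status} in {time.monotonic() - start:.3f}s")
    return body

print(len(proxied_get("https://example.com/")))
```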

 

While CNCF will likely decide in favor of adding the service mesh to its cloud-native requirements, you can get those benefits today with Affirmed Networks. It’s just another example of our forward-thinking approach: it makes a lot more sense to build those capabilities into our cloud-native architecture right from the beginning than to mesh around with it later.