
Author: Ron Parker

The End of the ISV Era

by Ron Parker

It’s no secret that cloud services are on the rise. Gartner estimates that businesses will spend $266.4 billion on cloud services this year, growing to $354.6 billion by 2022. Why is cloud consumption rising so quickly? Because free-market economies hate inefficiencies, and the cloud is all about efficiency. But that’s not to suggest that cloud adoption didn’t create ripples in the industry, as we see when we look at its history and its impact on independent software vendors (ISVs).


History of Cloud Services

In the early days of cloud, enterprises were primarily attracted to the cloud for data center outsourcing: servers, switches, storage, infrastructure-oriented software, and the expertise to manage it all. The premise was that the cloud provider could offer better economies of scale by hosting multiple customer data centers and streamlining their own operations through in-house expertise, particularly around automation. In exchange for a monthly fee, enterprises could eliminate the hardware, software and IT resources associated with maintaining their own data center environment.

Infrastructure Abstraction

Over time, enterprises (and their software suppliers) were able to focus more on the applications and less on the integration points of their legacy infrastructure—the value of infrastructure abstraction was profitably exploited. This soon led to enterprises embracing managed Software as a Service (SaaS) offerings for the supporting systems of their mission-critical applications.

Examples of this include observability-oriented systems (e.g., Elasticsearch, Logstash, Kibana, Prometheus, Grafana) and database systems (e.g., MongoDB, Cassandra). With the cloud provider now offering these systems as managed services, the enterprise no longer needed to worry about deploying and supporting them; they could just order them through the cloud provider’s portal.

As all this lower-layer abstraction was happening, however, the remaining applications and the business logic they contained grew more complex—so complex, in fact, that the traditional model of licensing application software to another organization for deployment and operation began to disappear. Modern software is now consumed in one of two ways but operated in only one: the organization that builds the software also operates it. Enterprises consume the software they write directly, and they consume everything else as a service via APIs over the Internet; in the latter case, the API provider is operating software it wrote itself.

Businesses Defined by Software

While this change was happening, an even more important transformation was taking place in the software industry. A growing number of businesses became defined by software, which created a greater need for improved agility. It was no longer acceptable to deliver only two or four upgrades per year; instead, businesses needed tens, hundreds and even thousands of software updates per day.

CI/CD (Continuous Integration/Continuous Deployment)

In 2010, responding to this need, Jez Humble and David Farley published Continuous Delivery, extending the existing concept of continuous integration into what the industry now calls Continuous Integration/Continuous Deployment (CI/CD). CI/CD proposed combining the previously separate functions of development and operations into a single function, DevOps. The DevOps team would be responsible for feature development, test automation and production operations. CI/CD maintained that only by breaking down these internal barriers could an enterprise reach the point of executing ten or more updates per day.

There was only one problem with CI/CD: existing software architectures were poorly suited to frequent releases because virtually all software was released monolithically. The software code may have been modularized, but the release needed to include all the functionality, fully tested. How fast could enterprises update a complex, monolithic application?

Microservices

As enterprises were struggling to answer this question, the idea of microservices—which had been floating around since 2007—began to take hold in architectural circles in 2011. The microservices approach held that, by breaking larger applications into bite-size pieces that were developed and released independently, application development teams could release tens, hundreds and even thousands of software updates per day using a fully automated CI/CD pipeline.

This meant that no human intervention would be required between the time a developer committed the code and the time that code ran in production. Microservices—particularly stateless microservices—bring their own complexities, however: API version controls, DB schema controls, observability, and more. Fortunately, in CI/CD’s DevOps team model, all the necessary expertise is contained in a single group.
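
To make the idea concrete, here is a minimal sketch of a stateless microservice in Go, with the API version carried in the URL path so that old and new releases can run side by side during an automated rollout. The endpoint names and port are illustrative, not from any particular product:

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

func main() {
	mux := http.NewServeMux()
	// Version the API in the path so v1 and v2 releases can coexist
	// while the CI/CD pipeline rolls traffic over.
	mux.HandleFunc("/v1/status", func(w http.ResponseWriter, r *http.Request) {
		_ = json.NewEncoder(w).Encode(map[string]string{"version": "v1", "status": "ok"})
	})
	// Liveness endpoint for the orchestrator; the service holds no local
	// state, so a replacement instance can take over at any time.
	mux.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
	})
	log.Fatal(http.ListenAndServe(":8080", mux))
}
```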


The Impact on ISVs

So how does all this impact ISVs? Remember, these are the entities that produce applications and license them for use by others. Whether the licensing is annual or perpetual, the main issue is that the purchaser is ultimately responsible for the deployment and operation of that software. ISVs often supplement their licensing with extensive professional services and training as a means to achieve the requisite knowledge transfer. But that knowledge transfer is never complete, and the ultimate experts on the vendor’s software remain in the vendor’s organization.

What’s the natural consequence of this? Customers stop licensing software and instead consume it as hosted or fully managed services. That is a change from the way telcos have done things in the past, and change is never easy, but the efficiency and agility benefits of moving to a modern, cloud-native model can have a profoundly positive impact on telcos going forward.

Think You’ve Got 5G Security Issues Protected? Think Again.

by Ron Parker

As interest in 5G continues to heat up, you’re likely to hear a lot more about 5G security. You may not, however, be hearing the whole story. Most conversations around 5G security center on the standards put forward by 3GPP last year. Those standards are a good starting point, don’t get me wrong, but they’re not the last word on 5G security issues by a long shot. Why? Because they completely leave container security out of the conversation.

5G Security and Containers

There are a lot of new network elements to consider in a 5G architecture, but the biggest change in 5G is the fact that almost everything now runs on containerized software. In terms of 5G security threats, containers are prime targets for cybercriminals because they contain sensitive data such as passwords and private keys. Understanding how to protect containers from security threats is just as important as protecting the transport layers and gateways in a 5G network. Building on what 3GPP has proposed, we believe that 5G security protection has four main objectives, only two of which are currently addressed by the 3GPP’s recommendations.

A Four-point Approach to 5G Security

Let’s start with what 3GPP has already proposed for 5G security standards:

1. A trust model with two distinct, onion-layered approaches for roaming and non-roaming networks. In the non-roaming network, this model features an Access and Mobility Management Function (AMF) and Unified Data Management (UDM) in the core, wrapped by the Authentication Server Function (AUSF). For roaming networks, 3GPP introduces the Security Edge Protection Proxy (SEPP) for secure connectivity between the home and roaming networks, and the Network Exposure Function (NEF) to protect core services from inappropriate enterprise requests.

2. Encryption and authentication via Transport Layer Security (TLS), certificate management and OAuth2.
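
As a concrete illustration of point 2, here is a minimal Go sketch of a service that requires mutual TLS from its peers. The certificate file names and port are hypothetical; a real deployment would obtain them from the operator’s certificate management system:

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"log"
	"net/http"
	"os"
)

func main() {
	// Load this network function's own certificate and key (hypothetical paths).
	cert, err := tls.LoadX509KeyPair("nf.crt", "nf.key")
	if err != nil {
		log.Fatal(err)
	}
	// Trust only the operator's CA when verifying peer certificates.
	caPEM, err := os.ReadFile("operator-ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	server := &http.Server{
		Addr: ":8443",
		TLSConfig: &tls.Config{
			Certificates: []tls.Certificate{cert},
			ClientCAs:    pool,
			ClientAuth:   tls.RequireAndVerifyClientCert, // mutual TLS: peers must authenticate too
			MinVersion:   tls.VersionTLS12,
		},
	}
	log.Fatal(server.ListenAndServeTLS("", ""))
}
```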

But what about security for the 5G services themselves? As the network shifts from hardware to software, telco operators need to have software security provisions in place to protect their data and their customers. At Affirmed, we see this as involving two distinct but complementary initiatives:

3. Secure software development. App developers need to ensure they’re writing secure code, validating it securely (e.g., using static code analysis), drawing from secure repositories and building everything on a secure base-layer foundation (e.g., Fedora).

4. Secure containers. Containers represent attractive attack vectors for cybercriminals. 5G operators need to protect these containers by securing the orchestration engine (Kubernetes) with proper role-based access controls, guarding containers in use (through runtime container security) and managing access permissions between the containers via automated policy-driven networking and service mesh controls.

The need for container security isn’t unique to telcos, and that’s actually a good thing because they can now leverage existing security tools that have already been developed for other cloud-native applications. Unfortunately, a lot of telco vendors aren’t familiar with open-source tools like Aqua (for container security) and Falco (for orchestration engine security). Instead, these vendors leave software out of the security discussion, and that leaves telco operators with some big security holes to fill.

The Bottom Line on 5G Security

If telco operators expect to dominate the 5G landscape, they’ll need to stand on the shoulders of some pretty big cloud companies, particularly where containerization and security are concerned. 3GPP’s security recommendations are a good introduction to 5G security needs, but software security is the other half of the 5G story. If your vendor is only telling you the first half, talk to Affirmed.


Using Containers in a Cloud Architecture without Virtualization: Isn’t It Ironic?

by Ron Parker

The typical network transformation journey looks something like this: Linux, then virtual machines, then containers. But this blog is about the road less taken: how service providers can bypass virtualization altogether and use containers to go directly to the cloud.

That’s kind of a revolutionary concept. After all, many in IT have been trained to view virtualization as a necessary evolutionary step. Everything is more efficient in a virtualized environment, we were told. And then containers came along. The new reality is that you don’t need virtual machines to run containers. In fact, there are many cases where virtualization actually hurts the performance of a containerized application. In this article, we discuss the advantages of using containers vs. virtual machines.

Comparing Virtualization vs. Container Management Platforms

How can virtualization be a bad thing? Well, virtualization is great if you need to move and share applications between different physical servers, but it comes at a cost: roughly 10% of a server’s CPU is dedicated to running the virtualization layer itself. Containers, by contrast, invoke the services they need from their cloud service provider: storage, load balancing, and auto-scaling in particular. That frees up capacity on the server, which results in much faster performance—in some cases, as much as 25% faster (source: www.stratoscale.com/blog/data-center/running-containers-on-bare-metal/).

The Benefits of Container Management Platforms 

When I talk about the advantages of containers as a service, I’m really talking about Kubernetes, the container management platform. Kubernetes not only supports a variety of cloud environments—OpenStack, AWS, Google, Azure, etc.—but understands which environment it’s in and automatically spins up the appropriate service, such as ELB (Elastic Load Balancer) in an AWS environment or Octavia in an OpenStack environment. Kubernetes doesn’t distinguish between multi-tenant servers running virtual machines and bare-metal servers; it sees each VM or server as a node in a cluster. So whether or not you virtualize your servers has no impact on your ability to run containers, although it does impact management and performance. Basically, if you’re running a virtualized environment, you have two tiers of orchestration instead of one: the VIM (Virtualized Infrastructure Manager) and Kubernetes.
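
To illustrate the point, here is a small sketch using the Kubernetes Go client (client-go) to list the nodes of a cluster; nothing in the API distinguishes VM nodes from bare-metal ones. It assumes the program runs inside the cluster with permission to read nodes:

```go
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// In-cluster configuration: credentials come from the pod's service account.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, n := range nodes.Items {
		// A node is a node: Kubernetes doesn't care whether it's a VM or bare metal.
		fmt.Println(n.Name, n.Status.NodeInfo.OSImage)
	}
}
```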

But wait a minute, you may be thinking: I thought you needed a virtualized environment to run OpenStack? There’s the irony or, more to the point, Ironic. OpenStack Ironic is designed specifically to let OpenStack manage bare-metal servers. With it, you can group bare-metal servers into a Kubernetes cluster just as you would group VMs into a cluster. What if you want to run containers on bare-metal servers without OpenStack? This can be done too, and is known as “bare-metal Kubernetes”. Load balancing, in this case, can be provided by the MetalLB project.

If running a cloud environment on bare-metal servers feels like taking a step back to take a step forward, take heart: Chances are, you’ll want both virtualized and non-virtualized servers in your cloud environment. The future isn’t a one-size-fits-all proposition for service providers. There will be cloud services for residential customers that may have ultra-high utilization rates, in which case the performance benefits of a bare-metal server make more sense. For finely sliced enterprise services, however, a flexible multi-tenant model is more desirable. The common thread for both approaches is agility.  

Of course, there’s a lot more to this discussion than we could “contain” to a single blog, so feel free to reach out to us if you want to take a deeper dive into cloud architectures.


Why Service Mesh for Microservices Makes Sense

by Ron Parker

Containers, Kubernetes, and microservices form the foundation of a cloud-native architecture, but they’re not the only considerations. In fact, as I write this, the Cloud Native Computing Foundation (CNCF) is considering adding a fourth pillar to its cloud-native requirements: the service mesh. A service mesh architecture for microservices makes sense, and in this blog, we explain why.

What is a Service Mesh?

When it comes to understanding and managing microservices, the service mesh for microservices is critical. Microservices are very small and tend to move around a lot, making them difficult to observe and track. At the service mesh layer, network operators can finally and clearly see how microservices interact with one another (and with other applications), secure those interactions, and manage them based on customizable policies.

The Importance of a Service Mesh

The Service Mesh Provides Load Balancing for Microservices

One of the functions that a service mesh provides is load balancing for microservices. Because microservices are instantiated dynamically—they can appear and disappear quickly—traditional network management tools aren’t granular enough to manage these life cycle events. The service mesh, however, understands which microservices are active, which microservices are related (and how), and can provide policy enforcement at a granular level by deciding how workloads should be balanced. For example, if a microservice is upgraded, the service mesh decides which requests should be routed to the microservices running the stable version and which should be routed to the microservices running the upgraded version. This policy can be modified multiple times during the upgrade process and serves as the basis for what the industry calls a “canary upgrade” approach.
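
Here is a toy Go sketch of the underlying idea: route a configurable percentage of traffic to the upgraded version and the rest to the stable one. Real meshes such as Istio express this as declarative routing policy rather than application code, and the service names here are hypothetical:

```go
package main

import (
	"fmt"
	"math/rand"
)

// pickBackend routes a fraction of requests to the canary (upgraded) version.
// canaryWeight is the percentage of traffic the upgraded microservice receives;
// an operator raises it in steps (e.g., 1% -> 10% -> 50% -> 100%) as confidence grows.
func pickBackend(canaryWeight int) string {
	if rand.Intn(100) < canaryWeight {
		return "reviews-v2" // hypothetical upgraded instance
	}
	return "reviews-v1" // hypothetical stable instance
}

func main() {
	for i := 0; i < 10; i++ {
		fmt.Println(pickBackend(10)) // roughly 10% of requests hit the canary
	}
}
```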

The Service Mesh Improves Microservices Security

Another area where the service mesh plays a valuable role is in microservices security. It is considered best practice to use the same security guidelines for communications between microservices as for their communications with the “outside” world. This means authentication, authorization and encryption need to be enforced for all intra-microservice communications. The service mesh enforces these security measures without affecting application code, and it can also enforce security-related policies such as whitelists/blacklists or rate-limiting in the event of a denial-of-service (DoS) attack. But the service mesh doesn’t stop at security between microservices; it extends security measures to inbound/outbound communications that take place through the ingress and egress API gateways that connect microservices to other applications.
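
As a small illustration of the rate-limiting idea, here is a Go sketch using the golang.org/x/time/rate package to reject excess requests before they reach a microservice. The limits shown are arbitrary examples:

```go
package main

import (
	"log"
	"net/http"

	"golang.org/x/time/rate"
)

func main() {
	// Allow 100 requests per second with a burst of 20; excess traffic is
	// rejected early, which blunts a flood before it reaches the application.
	limiter := rate.NewLimiter(rate.Limit(100), 20)

	handler := func(w http.ResponseWriter, r *http.Request) {
		if !limiter.Allow() {
			http.Error(w, "rate limit exceeded", http.StatusTooManyRequests)
			return
		}
		w.Write([]byte("ok"))
	}
	log.Fatal(http.ListenAndServe(":8080", http.HandlerFunc(handler)))
}
```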

The Service Mesh Provides Visibility of Microservices

Finally, the service mesh provides much-needed visibility into the microservices themselves. There are several tools available today that help with this: Istio, which provides the service mesh control plane; Envoy, a microservices sidecar that acts as the communications proxy for the API gateway functions; and Kiali, which visualizes the service mesh architecture at a given point in time and displays information such as error rates between microservices. If you’re unfamiliar with the sidecar concept, you can think of it as an adjunct container attached to the “main” microservice container that provides a supporting service—in the case of Envoy, intercepting both inbound and outbound REST calls.
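
To make the sidecar concept more tangible, here is a minimal Go sketch of a proxy that fronts the “main” container on the same host and intercepts its inbound REST calls. This illustrates the pattern, not Envoy itself; the ports are hypothetical:

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// The "main" container listens on localhost:9080; the sidecar fronts it on :8080.
	app, _ := url.Parse("http://127.0.0.1:9080")
	proxy := httputil.NewSingleHostReverseProxy(app)

	handler := func(w http.ResponseWriter, r *http.Request) {
		// Intercept the inbound call: log it here (a real sidecar would also
		// apply policy, mTLS and tracing headers) before handing it to the app.
		log.Printf("inbound %s %s", r.Method, r.URL.Path)
		proxy.ServeHTTP(w, r)
	}
	log.Fatal(http.ListenAndServe(":8080", http.HandlerFunc(handler)))
}
```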

While CNCF will likely decide in favor of adding the service mesh to its cloud-native requirements, you can get those benefits today with Affirmed Networks. It’s just another example of our forward-thinking approach: it makes a lot more sense to build those capabilities into our cloud-native architecture right from the beginning than to mesh around with it later.


Microservices Observability Brings Clarity to Cloud-Native Networks

by Ron Parker

In the world of microservices, observability is an important concept. Yes, containers and automation are very important concepts too, but with microservices observability, you can see what your microservices are doing, and you have the assurance that they’re performing and behaving correctly.

In the traditional telco world, this concept is known as service assurance. There are vendors who specialize in service assurance solutions that observe network traffic by pulling data straight from the fiber connections through physical taps. But how do you put a physical tap on a virtual machine? And how do you monitor cloud-native microservices when there may be thousands of them deployed on a single VM at any given moment?

The answer, of course, is you can’t. What works in the physical world is, in the cloud-native world, virtually impossible. Instead, the broader cloud community has developed a robust ecosystem of microservices observability tools to provide service assurance in a cloud-native network. Some of these tools are relatively new, while others have been used by enterprises and cloud providers for years. When Affirmed built its 5G Core solution, we made an important decision to leverage the best microservices observability tools through our Platform as a Service (PaaS) layer, giving mobile network operators (MNOs) a simple, effective way to deliver service assurance.

Four Aspects of Microservices Observability in Cloud-Native Networks

The concept of microservices observability in the cloud-native networking world is similar to FCAPS in the traditional telco world. Observability can be broken into four categories: application performance management; logging and events; faults; and tracing.

Application performance management (APM)

APM measures key performance indicators (KPIs) such as latency. The Cloud Native Computing Foundation (CNCF) recommends Prometheus as its KPI collection/storage tool. Prometheus is tightly integrated with Kubernetes and Helm; when you deploy a cloud-native network function (CNF), Prometheus is deployed with it, allowing it to scrape data from microservices in a non-intrusive and efficient manner. This data is then visualized through a tool called Grafana, which creates dashboards using widgets; Affirmed also integrates Grafana and includes pre-built dashboards and widgets in its 5GC solution.
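
For a sense of how non-intrusive this scraping model is for the application, here is a minimal Go sketch using the official Prometheus client library to record a latency KPI and expose it on a /metrics endpoint for Prometheus to scrape. The metric name and port are illustrative:

```go
package main

import (
	"log"
	"net/http"
	"time"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// Histogram of request latencies; Prometheus computes percentiles from the buckets.
var requestLatency = prometheus.NewHistogram(prometheus.HistogramOpts{
	Name: "request_latency_seconds",
	Help: "Latency of handled requests.",
})

func main() {
	prometheus.MustRegister(requestLatency)

	http.HandleFunc("/work", func(w http.ResponseWriter, r *http.Request) {
		start := time.Now()
		// ... handle the request ...
		requestLatency.Observe(time.Since(start).Seconds())
		w.Write([]byte("done"))
	})
	// Prometheus scrapes this endpoint on its own schedule; the app just exposes it.
	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```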

Logging and events

This records what is actually happening to apps and microservices in the cloud-native environment. Cloud providers generally use what is known as the EFK stack—Elasticsearch, Fluentd and Kibana—to record logs and generate alerts. At a high level, Fluentd collects the logs and messages, Elasticsearch stores them and Kibana provides the data visualization, again using dashboards.
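
In practice, all the application has to do is write structured logs to stdout; the EFK stack does the rest. Here is a minimal Go sketch using the standard library’s log/slog JSON handler (the field names and values are hypothetical):

```go
package main

import (
	"log/slog"
	"os"
)

func main() {
	// Emit structured JSON to stdout; Fluentd tails container stdout,
	// forwards the records to Elasticsearch, and Kibana visualizes them.
	logger := slog.New(slog.NewJSONHandler(os.Stdout, nil))
	logger.Info("session created",
		"microservice", "session-manager", // hypothetical service name
		"subscriber_id", "example-123",
		"latency_ms", 12,
	)
}
```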

Faults

To detect faults, the open-source community offers two tools: ElastAlert (so named because it is designed to work with Elasticsearch) and Prometheus’ Alertmanager, which comes bundled with Prometheus. ElastAlert lets you set specific rules and policies to manage alerts (e.g., when X happens, send Y alert via SMS), while Alertmanager generates alerts when specific numerical thresholds are crossed.

Tracing

Tracing is something unique to the cloud-native world, allowing network operators to track the exchanges between different microservices. For example, microservice A might invoke microservice B to complete a process, which in turn invokes microservice C and ultimately returns control back to microservice A. As you might imagine, tracking the exchanges between every microservice instance would generate an almost unmanageable amount of data, so tracing tools like Jaeger (which is supported by CNCF) collect only a very small sample of these exchanges (the default sample rate is a tenth of one percent) for analysis and visualization. But even this small sample is useful, as it provides visibility into how microservices are functioning. And through policy, this global sampling percentage can be overridden for transactions containing certain data (e.g., for a particular 5GC “slice” or from a particular user).
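
As a sketch of how such a sampling rate is set in code, here is an example using the OpenTelemetry Go SDK (a CNCF tracing toolkit that can export to Jaeger, among others) with a ratio-based sampler at the 0.1% rate mentioned above. This illustrates the mechanism, not any specific vendor’s implementation:

```go
package main

import (
	"context"

	"go.opentelemetry.io/otel"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

func main() {
	// Sample one in a thousand new traces (0.1%); ParentBased ensures child
	// spans inherit the parent's sampling decision, keeping traces complete.
	tp := sdktrace.NewTracerProvider(
		sdktrace.WithSampler(sdktrace.ParentBased(sdktrace.TraceIDRatioBased(0.001))),
	)
	otel.SetTracerProvider(tp)
	defer tp.Shutdown(context.Background())

	tracer := otel.Tracer("example")
	ctx, span := tracer.Start(context.Background(), "microservice-A-call")
	_ = ctx // pass ctx onward so downstream calls join the same trace
	span.End()
}
```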


Beyond those four aspects of observability, there’s still more to see in a cloud-native architecture. For example, tools like Istio, Envoy and Kiali are used to illustrate the microservices topology (referred to in the industry as a service mesh) and show where errors are happening. (Service meshes are worthy of their own blog, so stay tuned for that.)

In addition to using open-source, third-party tools, a fully virtualized probing solution can record logging and event data from within the microservice itself. Think of a fully virtualized probe (vProbe) as logging and events on steroids: it captures detailed microservices data from every CNF and user endpoint, including TCP round-trip times and retransmission rates, deep packet inspection details, and more. Unlike traditional physical probes, the vProbe doesn’t negatively impact network performance, and it generates a wealth of data to support big data analysis, artificial intelligence and machine-learning systems.

If your 5G vendor isn’t transparent about how their solutions support observability, maybe you should be looking for a new vendor.