
Author: Ron Parker

Think You’ve Got 5G Security Covered? Think Again.

by Ron Parker

As interest in 5G continues to heat up, you’re likely to hear a lot more about 5G security. You may not, however, be hearing the whole story. Most conversations around 5G security center on the standards put forward by 3GPP last year. Those standards are a good starting point, don’t get me wrong, but they’re not the last word on 5G security issues by a long shot. Why? Because they completely leave container security out of the conversation.

5G Security and Containers

There are a lot of new network elements to consider in a 5G architecture, but the biggest change in 5G is the fact that almost everything now runs on containerized software. In terms of 5G security threats, containers are prime targets for cybercriminals because they contain sensitive data such as passwords and private keys. Understanding how to protect containers from security threats is just as important as protecting the transport layers and gateways in a 5G network. Building on what 3GPP has proposed, we believe that 5G security protection has four main objectives, only two of which are currently addressed by the 3GPP’s recommendations.

A Four-point Approach to 5G Security

Let’s start with what 3GPP has already proposed for 5G security standards:

1. A trust model with two distinct, onion-layered approaches for roaming and non-roaming networks. In the non-roaming network, this model features an Access and Mobility Management Function (AMF) and Unified Data Management (UDM) in the core, wrapped by the Authentication Server Function (AUSF). For roaming networks, 3GPP introduces the Security Edge Protection Proxy (SEPP) for secure connectivity between the home and roaming networks, and the Network Exposure Function (NEF) to protect core services from inappropriate enterprise requests.

2. Encryption and authentication via Transport Layer Security (TLS), certificate management and OAuth2.

But what about security for the 5G services themselves? As the network shifts from hardware to software, telco operators need to have software security provisions in place to protect their data and their customers. At Affirmed, we see this as involving two distinct but complementary initiatives:

3. Secure software development. App developers need to ensure they’re writing secure code, validating it securely (i.e., using static code analysis), drawing from secure repositories and building everything on a secure base layer (e.g., Fedora).

4. Secure containers. Containers represent attractive attack vectors for cybercriminals. 5G operators need to protect these containers by securing the orchestration engine (Kubernetes) with proper role-based access controls, guarding containers in use (through runtime container security) and managing access permissions between the containers via automated policy-driven networking and service mesh controls.
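To make that fourth point concrete, here’s a minimal sketch of the role-based access control piece using the open-source kubernetes Python client. The namespace, role, and service-account names are purely illustrative, not drawn from any Affirmed product:

```python
# Minimal RBAC sketch with the kubernetes Python client: a namespaced Role
# that can only read pods, bound to a hypothetical "amf-operator" service
# account. All names here are illustrative.
from kubernetes import client, config

config.load_kube_config()  # use load_incluster_config() when run in a pod
rbac = client.RbacAuthorizationV1Api()

role = client.V1Role(
    metadata=client.V1ObjectMeta(name="pod-reader", namespace="5gc"),
    rules=[client.V1PolicyRule(
        api_groups=[""], resources=["pods"], verbs=["get", "list", "watch"])],
)
rbac.create_namespaced_role(namespace="5gc", body=role)

binding = client.V1RoleBinding(
    metadata=client.V1ObjectMeta(name="pod-reader-binding", namespace="5gc"),
    # RbacV1Subject is named V1Subject in older versions of the client
    subjects=[client.RbacV1Subject(
        kind="ServiceAccount", name="amf-operator", namespace="5gc")],
    role_ref=client.V1RoleRef(
        api_group="rbac.authorization.k8s.io", kind="Role", name="pod-reader"),
)
rbac.create_namespaced_role_binding(namespace="5gc", body=binding)
```

Runtime container security and service mesh policy controls layer on top of this; locking down the orchestration engine simply ensures it isn’t the weakest link.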

The need for container security isn’t unique to telcos, and that’s actually a good thing because they can now leverage existing security tools that have already been developed for other cloud-native applications. Unfortunately, a lot of telco vendors aren’t familiar with open-source tools like Aqua (for container security) and Falco (for runtime threat detection). Instead, these vendors leave software out of the security discussion, and that leaves telco operators with some big security holes to fill.

The Bottom Line on 5G Security

If telco operators expect to dominate the 5G landscape, they’ll need to stand on the shoulders of some pretty big cloud companies, particularly where containerization and security are concerned. 3GPP’s security recommendations are a good introduction to 5G security needs, but software security is the other half of the 5G story. If your vendor is only telling you the first half, talk to Affirmed.

 

Using Containers in a Cloud Architecture without Virtualization: Isn’t It Ironic?

by Ron Parker

The typical network transformation journey looks something like this: Linux, VMs, containers. But this blog is about the road less taken: how service providers can bypass virtualization entirely by using containers and go directly to the cloud.

That’s kind of a revolutionary concept. After all, many in IT have been trained to view virtualization as a necessary evolutionary step. Everything is more efficient in a virtualized environment, we were told. And then containers came along. The new reality is that you don’t need virtual machines to run containers. In fact, there are many cases where virtualization actually hurts the performance of a containerized application. In this article, we discuss the advantages of using containers vs. virtual machines.

Comparing Virtualization vs. Container Management Platforms

How can virtualization be a bad thing? Well, virtualization is great if you need to move and share applications between different physical servers, but it comes at a cost: about 10% of a server’s CPU is dedicated to running the virtualization layer. Containers, by contrast, invoke the services they need from their cloud service provider: the storage, load balancing, and auto-scaling services in particular. And that frees up resources on the server, which results in much faster performance—in some cases, as much as 25% faster (source: www.stratoscale.com/blog/data-center/running-containers-on-bare-metal/).

The Benefits of Container Management Platforms 

When I talk about the advantages of containers as a service, I’m really talking about Kubernetes, the container management platform. Kubernetes not only supports a variety of cloud environments—OpenStack, AWS, Google, Azure, etc.—but understands which environment it’s in and automatically spins up the appropriate service, such as ELB (Elastic Load Balancer) for the AWS environment or Octavia if it’s an OpenStack environment. Kubernetes doesn’t distinguish between multi-tenant servers running virtual machines and bare-metal servers. It sees each VM or server, respectively, as a node in a cluster. So whether or not you virtualize your servers has no impact on your ability to run containers, although it does impact management and performance. Basically, if you’re running a virtualized environment, you have two tiers of orchestration instead of one: the VIM (Virtualization Infrastructure Manager) and Kubernetes.
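To illustrate that environment-awareness, here’s a hedged sketch using the kubernetes Python client: one and the same Service definition gets fulfilled by ELB on AWS, by Octavia on OpenStack, or by a bare-metal equivalent (more on that below). The service and app names are invented for the example:

```python
# Sketch: a Service of type LoadBalancer. Kubernetes itself decides how to
# satisfy it (AWS ELB, OpenStack Octavia, or a bare-metal load balancer);
# the definition doesn't change. Names are illustrative.
from kubernetes import client, config

config.load_kube_config()

svc = client.V1Service(
    metadata=client.V1ObjectMeta(name="upf-frontend"),
    spec=client.V1ServiceSpec(
        type="LoadBalancer",               # fulfilled by the environment
        selector={"app": "upf"},           # route to pods with this label
        ports=[client.V1ServicePort(port=80, target_port=8080)],
    ),
)
client.CoreV1Api().create_namespaced_service(namespace="default", body=svc)
```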

But wait a minute, you may be thinking: don’t you need a virtualized environment to run OpenStack? There’s the irony or, more to the point, Ironic. OpenStack Ironic is an OpenStack project designed to manage bare-metal servers. With it, you can group bare-metal servers into a Kubernetes cluster just as you would group VMs into a cluster. What if you want to run containers on bare-metal servers without OpenStack? That can be done too; it’s known as “Kubernetes bare metal.” Load balancing, in this case, can be provided by the MetalLB project.

If running a cloud environment on bare-metal servers feels like taking a step back to take a step forward, take heart: Chances are, you’ll want both virtualized and non-virtualized servers in your cloud environment. The future isn’t a one-size-fits-all proposition for service providers. There will be cloud services for residential customers that may have ultra-high utilization rates, in which case the performance benefits of a bare-metal server make more sense. For finely sliced enterprise services, however, a flexible multi-tenant model is more desirable. The common thread for both approaches is agility.  

Of course, there’s a lot more to this discussion than we could “contain” to a single blog, so feel free to reach out to us if you want to take a deeper dive into cloud architectures.

 

Why Service Mesh for Microservices Makes Sense

by Ron Parker

Containers, Kubernetes, and microservices form the foundation of a cloud-native architecture, but they’re not the only considerations. In fact, as I write this, the Cloud Native Computing Foundation (CNCF) is considering adding a fourth pillar to its cloud-native requirements: the service mesh. A service mesh architecture for microservices makes sense, and in this blog, we explain why.

What is a Service Mesh?

When it comes to understanding and managing microservices, the service mesh is critical. Microservices are very small and tend to move around a lot, making them difficult to observe and track. At the service mesh layer, network operators can finally see clearly how microservices interact with one another (and with other applications), secure those interactions, and manage them based on customizable policies.

The Importance of a Service Mesh

The Service Mesh Provides Load Balancing for Microservices

One of the functions that a service mesh provides is load balancing for microservices. Recall that microservices are instantiated in a dynamic fashion—that is, they can appear and disappear quickly—so traditional network management tools aren’t granular enough to manage these microservice life-cycle events. The service mesh, however, understands which microservices are active, which microservices are related (and how), and can provide policy enforcement at a granular level by deciding how workloads should be balanced. For example, if a microservice is upgraded, the service mesh decides which requests should be routed to the microservices running the stable version and which should be routed to those running the upgraded version. This policy can be modified multiple times during the upgrade process and serves as the basis for what the industry calls a “canary upgrade” approach.
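As a concrete (and hypothetical) sketch of that canary policy, here’s what weight-based routing looks like as an Istio VirtualService, applied through the generic CustomObjectsApi of the kubernetes Python client. The service name is illustrative, and the sketch assumes a DestinationRule has already defined the “stable” and “canary” subsets:

```python
# Canary-upgrade sketch: send 90% of requests to the stable subset and 10%
# to the upgraded one via an Istio VirtualService. Assumes a DestinationRule
# already defines subsets "stable" and "canary"; all names are illustrative.
from kubernetes import client, config

config.load_kube_config()

virtual_service = {
    "apiVersion": "networking.istio.io/v1beta1",
    "kind": "VirtualService",
    "metadata": {"name": "smf", "namespace": "5gc"},
    "spec": {
        "hosts": ["smf"],
        "http": [{
            "route": [
                {"destination": {"host": "smf", "subset": "stable"},
                 "weight": 90},
                {"destination": {"host": "smf", "subset": "canary"},
                 "weight": 10},
            ],
        }],
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="networking.istio.io", version="v1beta1", namespace="5gc",
    plural="virtualservices", body=virtual_service)
```

Shifting the weights from 90/10 toward 0/100 over the course of the rollout is exactly the kind of policy modification described above.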

The Service Mesh Improves Microservices Security

Another area where the service mesh plays a valuable role is in microservices security. It is considered best practice to use the same security guidelines for communications between microservices and for their communications with the “outside” world. This means authentication, authorization and encryption need to be enforced for all intra-microservice communications. The service mesh enforces these security measures without affecting application code, and it also enforces security-related policies such as whitelists/blacklists or rate limiting in the event of a denial-of-service (DoS) attack. But the service mesh doesn’t stop at security between microservices; it extends security measures to inbound/outbound communications that take place through the ingress and egress API gateways that connect microservices to other applications.
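For a sense of how little code this takes, here’s a hedged sketch that turns on strict mutual TLS for every workload in a namespace using an Istio PeerAuthentication resource (the namespace is illustrative):

```python
# Enforce mutual TLS for all intra-microservice traffic in a namespace via
# an Istio PeerAuthentication resource; no application code changes needed.
from kubernetes import client, config

config.load_kube_config()

peer_auth = {
    "apiVersion": "security.istio.io/v1beta1",
    "kind": "PeerAuthentication",
    "metadata": {"name": "default", "namespace": "5gc"},
    "spec": {"mtls": {"mode": "STRICT"}},  # reject any plaintext traffic
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="security.istio.io", version="v1beta1", namespace="5gc",
    plural="peerauthentications", body=peer_auth)
```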

The Service Mesh Provides Visibility of Microservices

Finally, the service mesh provides much-needed visibility into the microservices themselves. There are several tools available today that help with this: Istio, which provides the control plane for microservices; Envoy, a microservices sidecar that acts as the communications proxy for the API gateway functions; and Kiali, which visualizes the service mesh architecture at a given point in time and displays information such as error rates between microservices. If you’re unfamiliar with the sidecar concept, you can think of it as an adjunct container attached to the “main” microservice container that provides a supporting service—in the case of Envoy, intercepting both inbound and outbound REST calls.

CNCF will likely decide in favor of adding the service mesh to its cloud-native requirements, but you can get those benefits today with Affirmed Networks. It’s just another example of our forward-thinking approach, since it makes a lot more sense to build those capabilities into our cloud-native architecture right from the beginning than to mesh around with it later.

 

Microservices Observability Brings Clarity to the Cloud-Native Network

by Ron Parker

In the world of microservices, observability is an important concept. Yes, containers and automation are very important concepts too, but with microservices observability, you can see what your microservices are doing, and you have the assurance that they’re performing and behaving correctly.

In the traditional telco world, this concept is known as service assurance. There are vendors who specialize in service assurance solutions that observe network traffic by pulling data straight from the fiber connections through physical taps. But how do you put a physical tap on a virtual machine? And how do you monitor cloud-native microservices when there may be thousands of them deployed on a single VM at a given moment in time?

The answer, of course, is you can’t. What works in the physical world is, in the cloud-native world, virtually impossible. Instead, the broader cloud community has developed a robust ecosystem of microservices observability tools to provide service assurance in a cloud-native network. Some of these tools are relatively new, while others have been used by enterprises and cloud providers for years. When Affirmed built its 5G Core solution, we made an important decision to leverage the best microservices observability tools through our Platform as a Service (PaaS) layer, giving mobile network operators (MNOs) a simple, effective way to deliver service assurance.

Four Aspects of Microservices Observability in Cloud-Native Networks

The concept of microservices observability in the cloud-native networking world is similar to FCAPS (fault, configuration, accounting, performance and security management) in the traditional telco world. Observability can be broken into four categories: application performance management; logging and events; faults; and tracing.

Application performance management

Application performance management (APM) measures key performance indicators (KPIs) such as latency. The Cloud Native Computing Foundation (CNCF) recommends Prometheus as its KPI collection/storage tool. Prometheus is tightly integrated with Kubernetes and Helm; when you deploy a cloud-native network function (CNF), Prometheus is deployed with it, allowing it to scrape data from microservices in a non-intrusive and efficient manner. This data is then visualized through a tool called Grafana, which creates dashboards using widgets; Affirmed also integrates Grafana and includes pre-built dashboards and widgets in its 5GC solution.
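As a small illustration of how non-intrusive that scraping is, here’s a sketch using the open-source prometheus_client library. The metric name and port are invented for the example, not taken from Affirmed’s 5GC:

```python
# Expose a latency histogram on :8000/metrics for Prometheus to scrape.
# Metric and service names are illustrative.
import random
import time

from prometheus_client import Histogram, start_http_server

REQUEST_LATENCY = Histogram(
    "amf_request_latency_seconds", "Latency of AMF requests in seconds")

@REQUEST_LATENCY.time()  # records the wall-clock duration of each call
def handle_request():
    time.sleep(random.uniform(0.01, 0.1))  # stand-in for real work

if __name__ == "__main__":
    start_http_server(8000)  # Prometheus scrapes http://host:8000/metrics
    while True:
        handle_request()
```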

Logging and events

This records what is actually happening to apps and microservices in the cloud-native environment. Cloud providers generally use what is known as the EFK stack—Elasticsearch, Fluentd and Kibana—to record logs and generate alerts. At a high level, Fluentd collects the logs and messages, Elasticsearch stores them and Kibana provides the data visualization, again using dashboards.
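The usual contract with Fluentd is simple: write structured JSON to stdout and let the collector do the rest. A minimal sketch (the field and service names are illustrative):

```python
# Emit structured JSON logs to stdout, where Fluentd typically picks them up
# and forwards them to Elasticsearch. Field names are illustrative.
import json
import logging
import sys

class JsonFormatter(logging.Formatter):
    def format(self, record):
        return json.dumps({
            "ts": self.formatTime(record),
            "level": record.levelname,
            "service": "smf",  # hypothetical microservice name
            "msg": record.getMessage(),
        })

handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())
log = logging.getLogger("smf")
log.addHandler(handler)
log.setLevel(logging.INFO)

log.info("PDU session established")
```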

Faults

To detect faults, the open-source world offers two tools: ElastAlert (so named because it is designed to work with Elasticsearch) and Prometheus’s Alertmanager, which comes bundled with the Prometheus tool. ElastAlert lets you set specific rules and policies to manage alerts (e.g., when X happens, send Y alert via SMS), while Alertmanager generates alerts when specific numerical thresholds are crossed.
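To show what “rules and policies” means in practice, here’s a hedged sketch of an ElastAlert frequency rule, written as the Python dict you would dump to the rule’s YAML file. The index, thresholds, and email address are invented:

```python
# An ElastAlert "frequency" rule: alert if 50 matching error logs appear
# within 5 minutes. Index, thresholds, and addresses are illustrative.
import yaml  # PyYAML

rule = {
    "name": "error-burst",
    "type": "frequency",
    "index": "logstash-*",
    "num_events": 50,             # fire after 50 matches...
    "timeframe": {"minutes": 5},  # ...within a 5-minute window
    "filter": [{"term": {"level": "ERROR"}}],
    "alert": ["email"],
    "email": ["noc@example.com"],
}

print(yaml.safe_dump(rule))  # paste the output into a rule file
```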

Tracing

Tracing is something unique to the cloud-native world, allowing network operators to track the exchanges between different microservices. For example, microservice A might invoke microservice B to complete a process, which in turn invokes microservice C and ultimately returns control back to microservice A. As you might imagine, tracking the exchanges between every microservice instance would generate an almost-unmanageable amount of data, so tracking tools like Jaeger (which is supported by CNCF) only collect a very small sample (the default sample rate is a tenth of one percent) of these exchanges for analysis and visualization. But even this small sample is useful, as it provides visibility into how microservices are functioning.  And through policy, this global sampling percentage can be overridden for transactions containing certain data (e.g., for a particular 5GC ‘slice’ or from a particular user).
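One way to reproduce that 0.1% probabilistic sampling in code is with the OpenTelemetry SDK, which can export to Jaeger. This sketch prints spans to the console instead so it runs standalone, and the span and attribute names are illustrative:

```python
# Trace-sampling sketch with the OpenTelemetry SDK: keep roughly 1 in 1,000
# traces (0.1%), mirroring the Jaeger default mentioned above. Spans are
# printed to the console here; a real deployment would export to Jaeger.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import (ConsoleSpanExporter,
                                            SimpleSpanProcessor)
from opentelemetry.sdk.trace.sampling import TraceIdRatioBased

provider = TracerProvider(sampler=TraceIdRatioBased(0.001))  # 0.1% of traces
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("amf")  # hypothetical service name

with tracer.start_as_current_span("create-session") as span:
    span.set_attribute("slice", "embb")  # illustrative 5GC slice attribute
```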

 

Beyond those four aspects of observability, there’s still more to see in a cloud-native architecture. For example, tools like Istio, Envoy and Kiali are used to illustrate the microservices topology (referred to in the industry as a service mesh) and show where errors are happening. (Service meshes are worthy of their own blog, so stay tuned for that.)

In addition to using open-source, third-party tools, a fully virtualized probing solution can record logging and event data within the microservice itself. Think of a fully virtualized probe (vProbe) as logging and events on steroids, as it captures detailed microservices data from every CNF and user endpoint: TCP round-trip times and retransmission rates, deep packet inspection details, and the list goes on. Unlike traditional physical probes, the vProbe doesn’t negatively impact network performance, and it generates a wealth of data to support big data analyses, artificial intelligence and machine-learning systems.

If your 5G vendor isn’t transparent about how their solutions support observability, maybe you should be looking for a new vendor.

Are You Ready for the Automatic, Non-Stop Future?

by Ron Parker

A few years back, the cloud community went through a kind of container craze. Everything you read seemed to be about containers and how they were going to revolutionize the software development industry. The thing is, containers are just, well, containers. They contain all the elements needed to run a particular process, which dramatically simplifies software by making it smaller and self-contained, but by themselves, they’re just a small part of something bigger. That bigger thing is called a microservice.

How Dynamic Orchestration, Cloud Containers, and Microservices Work Together

Microservices will revolutionize software development as everything moves to the cloud. But here again, they won’t do it alone. If you’re familiar with cloud-native architecture, you’ll recall there are three key pillars of a cloud-native framework: containers, microservices, and dynamic orchestration. This last pillar is absolutely critical because it’s where the value of containers and microservices is realized. Small, agile and self-contained is very important, but only provided you can automate and perpetuate the software lifecycle.

If you think of the lifecycle of software, there are three basic stages: birth (deployment), midlife (updates, changes) and death (removal). In the current telco world, each stage requires a lot of effort, which tends to stifle innovation and adaptation. With dynamic orchestration, you can automate these stages so that new software instances can be instantly created, updated and removed based on pre-defined criteria, including real-time traffic demands. You read that right: with dynamic orchestration, you can actually have a system that knows when to create or retire instances of microservices and does so automatically because you’ve already told it how to behave under certain conditions.
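Here’s a small, hedged example of “telling it how to behave”: a HorizontalPodAutoscaler, created with the kubernetes Python client, that lets Kubernetes add and retire replicas on its own based on a declared CPU target. All names and numbers are illustrative:

```python
# Declare the desired behavior once; Kubernetes then creates and retires
# replicas automatically. Deployment name, namespace, and limits are
# illustrative.
from kubernetes import client, config

config.load_kube_config()

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="amf-hpa", namespace="5gc"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="amf"),
        min_replicas=2,
        max_replicas=10,
        target_cpu_utilization_percentage=80,  # scale out above 80% CPU
    ),
)
client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="5gc", body=hpa)
```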

 

Kubernetes & Helm: Key Tools for Dynamic Orchestration

The keys to dynamic orchestration consist of one relatively new software tool, Kubernetes, and one very new software tool, Helm. Together, Kubernetes and Helm allow software teams to run containers in a contextual group called a pod and apply declarative statements to manage the behavior of those pods under certain conditions. In the case of Affirmed’s virtualized Access and Mobility Management Function (AMF), for example, the AMF pod features three separate containers: the AMF call control container, the service mesh communication proxy container (what is referred to as a helper container) and Affirmed’s infrastructure container featuring multiple supplemental sidecars (a Kubernetes term) with supporting services. Kubernetes groups pod types like these together and sets rules or conditions regarding the AMF network function that automatically apply to all instances of it in the network.
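Here’s a stripped-down, hypothetical sketch of that pod pattern using the kubernetes Python client: one call-control container grouped with a mesh-proxy helper in a single pod. The image names are placeholders, not Affirmed’s actual images:

```python
# A pod grouping a "main" container with a sidecar-style helper container,
# as described above. Image and pod names are placeholders.
from kubernetes import client, config

config.load_kube_config()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="amf-0", labels={"app": "amf"}),
    spec=client.V1PodSpec(containers=[
        client.V1Container(name="amf-call-control", image="example/amf:1.0"),
        client.V1Container(name="mesh-proxy",
                           image="envoyproxy/envoy:v1.27.0"),
    ]),
)
client.CoreV1Api().create_namespaced_pod(namespace="5gc", body=pod)
```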

The role of Helm is to provide Kubernetes with a system-level view of the network, giving Kubernetes a kind of built-in intelligence. Compared to a VNF manager telling OpenStack what to do (something called an imperative command), Helm shares intelligence about pod types and the rules/conditions that have been declared as ideal (e.g., use up to 80 percent of CPU for a particular pod type), so that Kubernetes can dynamically and automatically do the right thing under the right circumstances.

Kubernetes and Helm are a significant improvement in the cloud world. In fact, the chief mission of the Cloud Native Computing Foundation (CNCF) is to promote the adoption of Kubernetes for all cloud computing environments. Relative to OpenStack plus one (or more) VNF managers, Kubernetes + Helm is much lighter and more agile. Where OpenStack is reactive—i.e., it only does what it’s told to do—Kubernetes is proactive. Told once, Kubernetes remembers what the ideal state is for a particular pod type and will automatically take corrective action to achieve that state when possible.

The idea of a dynamically automated network has been a holy grail for a long time. What Kubernetes + Helm does is invite telco software developers and their customers to the non-stop, automatic party of the 5G future. We hope we’ll see you there.