10 Tips for a Successful NFV Deployment

by Affirmed

Over the past eight years, Affirmed Networks has helped leading service providers successfully transition to NFV-based architectures and realize exceptional returns. Along the way, we’ve learned some valuable NFV deployment lessons on how providers can avoid underwhelming NFV results and realize the technology’s full transformative benefits.

 

Some telecommunications network equipment vendors think that Network Functions Virtualization (NFV) is a byproduct of 5G and that the one shouldn’t arrive before the other. Reality says otherwise: many communication service providers are deriving value from NFV initiatives right now, primarily in the form of CAPEX/OPEX savings and network agility. Yet many service providers, in our experience, still tap into only 30 to 40 percent of NFV’s true potential.

What is it, then, that typically holds providers back from realizing NFV’s full potential? There is no single reason; rather, it’s likely a combination of missed opportunities and misunderstandings about NFV’s architectural requirements.

 

Our Tips for NFV Deployment Success

To help CSPs across the world ensure NFV deployment success as they prepare for 5G, we have identified the following tips for overcoming these challenges in 2020.

  1. Not all hardware is created equal
  2. The packet forwarding architecture and hypervisor need attention too
  3. Don’t oversubscribe the application
  4. NFV isn’t a plug-and-play solution
  5. Redundancy needs to be built into the application and not just the NFVI architecture
  6. Telecom applications require built-in load balancing
  7. VMs need to scale independently
  8. Ownership is important
  9. One EMS is better than two (or three)

 

#1 – Not all hardware is created equal

There is a belief that you can run virtualized telecom applications on any vendor’s server – however, this is only a half-truth. There is one hardware dependency that always needs to be considered: the hardware must have a network interface card (NIC) that supports the Data Plane Development Kit (DPDK) in order to function properly.

In our experience, we’ve found it’s often better to bundle the virtual network function (VNF) with hardware providers that support this NIC requirement rather than deploy the VNF in a hardware-agnostic environment.
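
As a quick sanity check before committing to a server platform, you can inspect which kernel driver each NIC uses and compare it against DPDK’s supported hardware. Below is a minimal sketch, assuming a Linux host with the standard sysfs layout; the driver set is an illustrative subset, not an exhaustive DPDK compatibility matrix.

```python
# Sketch: list each NIC and its kernel driver (Linux sysfs assumed).
import os

SYS_NET = "/sys/class/net"

# Illustrative subset of kernel drivers whose hardware has DPDK support.
DPDK_CAPABLE = {"ixgbe", "i40e", "ice", "mlx5_core", "bnxt_en"}

for nic in sorted(os.listdir(SYS_NET)):
    driver_link = os.path.join(SYS_NET, nic, "device", "driver")
    if not os.path.islink(driver_link):
        continue  # skip purely virtual interfaces such as lo
    driver = os.path.basename(os.readlink(driver_link))
    verdict = "likely DPDK-capable" if driver in DPDK_CAPABLE else "check vendor support"
    print(f"{nic}: driver={driver} ({verdict})")
```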

 

#2 – The packet forwarding architecture and hypervisor need attention too

While choosing the appropriate hardware can aid in the performance of your virtualized network, the packet forwarding architecture requires attention as well. The main function of the evolved packet core (EPC) is to move a large number of packets through the data plane. This means you need very high performance in the data plane. 

Typically, packets travel through the vSwitch function within the hypervisor, which queues them for the virtual machines (VMs). The vSwitch consumes a great deal of computing power, which limits the performance the VMs can achieve. Single-root input/output virtualization (SR-IOV) gets around this limitation: it allows packets to bypass the hypervisor layer and travel directly from the NIC, over the server’s PCI bus, to the VMs, giving the VMs full use of the CPU and significantly increasing performance.

While SR-IOV is not a requirement for an NFV deployment, its role and impact are sometimes misunderstood by service providers. If a provider requires very high throughput, then SR-IOV is necessary. Furthermore, applications are very sensitive to how the hypervisor is configured. To reach maximum performance, service providers must also tune the hypervisor to the specific requirements of their application (e.g., how the hypervisor schedules the CPUs, CPU pinning, etc.).
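
To make the SR-IOV mechanics concrete, here is a minimal sketch of enabling virtual functions (VFs) on a physical NIC through the Linux sysfs interface. The interface name and VF count are hypothetical, and the script assumes root privileges on a host whose NIC, BIOS and kernel support SR-IOV.

```python
# Sketch: enable SR-IOV virtual functions via Linux sysfs (root required).
from pathlib import Path

PF_IFACE = "enp3s0f0"  # hypothetical physical-function interface name
NUM_VFS = 8            # how many VFs to expose for VM passthrough

dev = Path(f"/sys/class/net/{PF_IFACE}/device")
total = int((dev / "sriov_totalvfs").read_text())
if NUM_VFS > total:
    raise SystemExit(f"{PF_IFACE} supports at most {total} VFs")

# The kernel rejects changes while VFs exist, so reset to zero first.
(dev / "sriov_numvfs").write_text("0")
(dev / "sriov_numvfs").write_text(str(NUM_VFS))
print(f"Enabled {NUM_VFS} VFs on {PF_IFACE}; each appears as a PCI device")
```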

 

#3 – Don’t oversubscribe the application

Another important NFV deployment tip is to never oversubscribe a virtual application’s CPU. Even though the technology allows it, oversubscription degrades the application’s performance and causes problems down the road.
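
A simple way to enforce this rule is to audit each host and flag any case where the vCPUs allocated to VMs exceed the physical CPUs available. The sketch below uses a hypothetical VM inventory; in practice you would pull these numbers from your virtualization manager.

```python
# Sketch: flag a host whose allocated vCPUs exceed its physical CPUs.
import os

host_cpus = os.cpu_count() or 1  # counts SMT threads; use core count if pinning

# Hypothetical per-VM vCPU allocations; in practice, query your VIM.
vm_vcpus = {"vepc-dp-1": 8, "vepc-dp-2": 8, "vepc-cp-1": 4, "vepc-mgmt-1": 2}

allocated = sum(vm_vcpus.values())
ratio = allocated / host_cpus
print(f"{allocated} vCPUs allocated on {host_cpus} CPUs (ratio {ratio:.2f})")
if ratio > 1.0:
    print("WARNING: oversubscribed host; data-plane VMs will see jitter and loss")
```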

 

#4 – NFV isn’t a plug-and-play solution

While virtualization is often marketed as plug and play, the reality is that it requires some tuning in the ecosystem for telecom applications to run at maximum performance. For example, in one NFV deployment, a customer experienced a denial-of-service attack that featured a lot of “burstiness” in the traffic. 

The DPDK driver, which had no concept of quality of service (QoS), was dropping packets indiscriminately. We had to modify the driver to avoid latency and packet loss. While this may seem like a minor detail, it can have a major impact on performance.
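
To illustrate the kind of burst handling involved – this is an illustrative sketch, not Affirmed’s actual driver change – a token-bucket policer admits packets at a sustained rate while absorbing bursts up to a configured size, so any drops become deliberate and rate-based rather than indiscriminate:

```python
# Sketch: a token-bucket policer that absorbs bursts up to a fixed size.
import time

class TokenBucket:
    def __init__(self, rate_pps: float, burst: int):
        self.rate = rate_pps           # sustained rate, packets per second
        self.capacity = burst          # maximum burst size, in packets
        self.tokens = float(burst)
        self.last = time.monotonic()

    def admit(self) -> bool:
        """Spend one token per packet; refill according to elapsed time."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False                   # deliberate, rate-based drop

bucket = TokenBucket(rate_pps=10_000, burst=256)
admitted = sum(bucket.admit() for _ in range(1_000))
print(f"admitted {admitted} of 1000 back-to-back packets")
```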

 

#5 – Redundancy needs to be built into the application and not just the NFVI architecture

In the enterprise world, redundancy is a relatively simple matter of spinning up a new VM when one VM fails. This works well for stateless, transaction-based applications, but telecom applications are stateful. When you lose the state of the VM, you lose the service. Also, when a VM fails, the time it requires to spin up a new VM is far too long for telecommunications applications and extends the problem of service disruption – a challenge for many service providers. 

In order to provide stateful redundancy in a telecom environment, operators cannot rely only on NFVI redundancy; statefulness needs to be built directly into the virtual application itself or maintained in an externalized database. That’s the approach we took when building our virtualized EPC solution, and it is a very important lesson to remember when talking about NFV.
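
The pattern looks roughly like the sketch below, where an in-memory dictionary stands in for an external replicated database such as Redis; the function names and session fields are hypothetical. The serving VM checkpoints session state externally, and a replacement VM recovers it after a failover.

```python
# Sketch: checkpoint session state outside the serving VM so a
# replacement VM can resume sessions after a failure.
EXTERNAL_STORE = {}  # stand-in for a replicated external database

def checkpoint_session(session_id, state):
    """Persist subscriber/bearer state on every state transition."""
    EXTERNAL_STORE[session_id] = dict(state)

def recover_session(session_id):
    """Called by the replacement VM after failover."""
    return EXTERNAL_STORE.get(session_id)

# VM A serves a session and checkpoints it (hypothetical fields).
checkpoint_session("imsi-001-42", {"bearer": "active", "apn": "internet"})

# VM A fails; VM B takes over statefully instead of dropping the session.
state = recover_session("imsi-001-42")
assert state is not None and state["bearer"] == "active"
```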

 

#6 – Telecom applications require built-in load balancing

One of the main benefits of a virtual environment is the ability to scale up or scale down your processing power as workload demands change. When decommissioning a VM, however, you lose the state of that VM. In an enterprise environment featuring stateless, transaction-based applications, this is not an issue—but it is an issue in a telecom environment where stateful applications are the norm. 

Telecom applications that support dynamic scaling need load balancing; this way, when new resources are available, the application can load-balance across the new resources to prevent dropping service during a call/session. We believe load balancing should be built into the application, as the application knows better how to use the resources than an external load balancer.
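
One common way to build such load balancing into an application – an illustrative technique, not necessarily Affirmed’s internal design – is consistent hashing: sessions map onto a hash ring of workers, so adding a VM moves only a small fraction of sessions rather than reshuffling all of them.

```python
# Sketch: a consistent-hash ring that maps sessions onto worker VMs.
import bisect
import hashlib

def h(key: str) -> int:
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class Ring:
    def __init__(self, workers, vnodes=64):
        # Each worker gets many virtual points for an even distribution.
        self.points = sorted((h(f"{w}#{i}"), w) for w in workers for i in range(vnodes))
        self.keys = [p for p, _ in self.points]

    def owner(self, session_id: str) -> str:
        idx = bisect.bisect(self.keys, h(session_id)) % len(self.points)
        return self.points[idx][1]

sessions = [f"session-{i}" for i in range(1000)]
ring = Ring(["dp-vm-1", "dp-vm-2"])
before = {s: ring.owner(s) for s in sessions}

ring = Ring(["dp-vm-1", "dp-vm-2", "dp-vm-3"])  # scale out by one VM
moved = sum(before[s] != ring.owner(s) for s in sessions)
print(f"{moved} of 1000 sessions moved; the rest stayed in place")
```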

 

#7 – VMs need to scale independently

NFV vendors need to be thinking about scalability before they build their solutions, not after. Specifically, vendors need to ensure that their VNFs can scale independently across different dimensions. In a telecom application, the data plane, management plane and control (i.e., signaling) plane each need to be scaled independently to avoid paying for stranded capacity. 

In a blade-based architecture, the signaling, data, and management capacity are added in fixed ratios; as more signaling capacity is needed, more blades are added. The result is that service providers end up with more data capacity as well, whether they need it or not. In a virtualized architecture, independent scaling is supported, meaning providers can scale up signaling capacity without affecting the data or management dimensions. This is the reason we chose to decompose each plane when we built our vEPC.

Applications need to be designed in a flexible way, allowing the scaling of VMs based on the specific call model or application (e.g., IoT, enterprise, consumer) and the availability of resources. By doing this, service providers can right-size the capacity for the specific call model.
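
As a back-of-the-envelope illustration with hypothetical numbers, sizing each plane from its own load driver shows how an IoT-heavy call model needs many signaling VMs but only a few data-plane VMs – capacity that a blade architecture’s fixed ratios would leave stranded:

```python
# Sketch: size each plane from its own load driver (hypothetical numbers).
import math

def vms_needed(load, capacity_per_vm):
    return math.ceil(load / capacity_per_vm)

# An IoT-heavy call model: heavy signaling, light data, minimal management.
plan = {
    "control": vms_needed(load=900_000, capacity_per_vm=150_000),  # transactions/s
    "data": vms_needed(load=40.0, capacity_per_vm=20.0),           # Gbps
    "management": vms_needed(load=2.0, capacity_per_vm=2.0),       # relative units
}
print(plan)  # {'control': 6, 'data': 2, 'management': 1}
```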

 

#8 – Ownership is important

Service providers have traditionally relied upon their vendors to provide all the layers of a solution. NFV architecture is different. There’s a hardware layer, a hypervisor layer, and an application layer to consider, with each vendor bringing their own perspective to the solution. Instead of one finger to point when things go wrong, service providers must now point several fingers. This creates a challenge for providers in managing NFV deployments, as there is no clear accountability. 

At Affirmed, we’ve addressed this problem by taking “ownership” of the NFV experience and ultimate responsibility for the way our vEPC solution behaves in the NFV infrastructure (NFVI) environment. Our customers appreciate having an experienced vendor as a lead implementor who can work with ecosystem partners to resolve any issues.

 

#9 – One EMS is better than two (or three)

Service providers are accustomed to a single element management system (EMS) that displays the state of the system (e.g., alarms, traps, etc.) across all solution layers. In an NFV architecture, however, there are separate element managers for each layer. Having an overarching EMS that extends visibility into all layers and manages them as a “single pane of glass” is an important capability for any NFV architecture.
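
As a toy illustration of the idea – the alarm data and interfaces here are hypothetical, and real element managers expose far richer APIs – consolidation amounts to normalizing each layer’s alarm feed into a common shape and presenting a single ordered view:

```python
# Sketch: merge per-layer alarm feeds into one consolidated view.
from datetime import datetime

# Alarms as each layer's element manager might report them (hypothetical).
hardware_ems   = [{"sev": "major", "msg": "fan failure", "at": "2020-01-06T10:02:00"}]
hypervisor_ems = [{"sev": "minor", "msg": "vSwitch CPU high", "at": "2020-01-06T10:05:00"}]
app_ems        = [{"sev": "critical", "msg": "vEPC session setup failures", "at": "2020-01-06T10:01:00"}]

SEVERITY_ORDER = {"critical": 0, "major": 1, "minor": 2}

def consolidate(*sources):
    """Merge per-layer alarm feeds and sort by severity, then time."""
    merged = [dict(a, layer=layer) for layer, feed in sources for a in feed]
    return sorted(merged, key=lambda a: (SEVERITY_ORDER[a["sev"]],
                                         datetime.fromisoformat(a["at"])))

for alarm in consolidate(("hardware", hardware_ems),
                         ("hypervisor", hypervisor_ems),
                         ("application", app_ems)):
    print(f'{alarm["at"]} [{alarm["layer"]}] {alarm["sev"].upper()}: {alarm["msg"]}')
```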

 

Take the time to learn from the leaders

Perhaps the most important lesson to be learned from the leaders in the NFV journey is not to wait. Some vendors will tell you that NFV isn’t ready for prime time. What they’re really saying is that their solutions aren’t ready yet.

At Affirmed, we’re building virtualized solutions that give the leading operators of today the competitive advantage they need to remain the leaders of tomorrow. Our cloud-native 5G core solution, UnityCloud, not only reduces CAPEX and OPEX but also provides the capabilities for new revenue-generating services, including service automation and microservices creation.

Learn more about NFV deployments with this whitepaper we published.

Using Containers in a Cloud Architecture without Virtualization: Isn’t it Ironic?

by Ron Parker

The typical network transformation journey looks something like this: Linux, VMs, containers. But this blog is about the road less taken, and how service providers can bypass virtualization by using containers to go directly to the cloud.

That’s kind of a revolutionary concept. After all, many in IT have been trained to view virtualization as a necessary evolutionary step. Everything is more efficient in a virtualized environment, we were told. And then containers came along. The new reality is that you don’t need virtual machines to run containers. In fact, there are many cases where virtualization actually hurts the performance of a containerized application. In this article, we discuss the advantages of using containers vs. virtual machines.

Comparing Virtualization vs. Container Management Platforms

How can virtualization be a bad thing? Well, virtualization is great if you need to move and share applications between different physical servers, but it comes at a cost: about 10% of a server’s CPU is dedicated to running the virtual OS. Containers, by contrast, invoke the services they need from their cloud platform – storage, load balancing, and auto-scaling in particular. Running them on bare metal frees up that CPU overhead, which results in much faster performance – in some cases, as much as 25% faster (source: www.stratoscale.com/blog/data-center/running-containers-on-bare-metal/).

The Benefits of Container Management Platforms 

When I talk about the advantages of containers as a service, I’m really talking about Kubernetes, the container management platform. Kubernetes not only supports a variety of cloud environments—OpenStack, AWS, Google, Azure, etc.—but understands which environment it’s in and automatically spins up the appropriate service, such as ELB (Elastic Load Balancer) for the AWS environment or Octavia if it’s an OpenStack environment. Kubernetes doesn’t distinguish between multi-tenant servers running virtual machines and bare-metal servers. It sees each VM or server, respectively, as a node in a cluster. So whether or not you virtualize your servers has no impact on your ability to run containers, although it does impact management and performance. Basically, if you’re running a virtualized environment, you have two tiers of orchestration instead of one: the VIM (Virtualization Infrastructure Manager) and Kubernetes.
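
A small sketch of that node abstraction, assuming the official Kubernetes Python client and a reachable cluster: the same call returns the nodes whether they are virtual machines or bare-metal servers.

```python
# Sketch: list cluster nodes; VMs and bare-metal servers appear alike.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
for node in client.CoreV1Api().list_node().items:
    info = node.status.node_info
    print(node.metadata.name, info.os_image, info.kubelet_version)
```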

But wait a minute, you may be thinking, I thought you needed a virtualized environment to run OpenStack? There’s the irony or, more to the point, Ironic. OpenStack Ironic is an OpenStack service designed specifically to manage bare-metal servers. With it, you can group bare-metal servers into a Kubernetes cluster just as you would group VMs into a cluster. What if you want to run containers on bare-metal servers without OpenStack? This can be done too, and is known as “bare-metal Kubernetes.” Load balancing, in this case, can be provided by the MetalLB project.

If running a cloud environment on bare-metal servers feels like taking a step back to take a step forward, take heart: Chances are, you’ll want both virtualized and non-virtualized servers in your cloud environment. The future isn’t a one-size-fits-all proposition for service providers. There will be cloud services for residential customers that may have ultra-high utilization rates, in which case the performance benefits of a bare-metal server make more sense. For finely sliced enterprise services, however, a flexible multi-tenant model is more desirable. The common thread for both approaches is agility.  

Of course, there’s a lot more to this discussion than we could “contain” to a single blog, so feel free to reach out to us if you want to take a deeper dive into cloud architectures.

 

Unlock Your Innovation Using Open Source PaaS Technology

by Ashwin Moranganti

It’s time to take the future back from equipment vendors and return it to the operators and the innovators!

Coming from a vendor, that’s a bold statement, so let me explain. For too long, telco operators have leaned on their equipment vendors to provide their platform of the future. Understand this: If you’re looking to a single vendor to solve your problems, you’re looking to get locked into someone else’s future. Instead, telco operators need to stop leaning on a single source, and learn to unlean by using open-source PaaS (Platform as a Service) technology and best-of-breed solutions from many vendors.

At Affirmed, we call this disaggregation. Even in a network where hardware has been virtualized, there is still a lot of proprietary functionality found in so-called virtualized network functions (VNFs). Disaggregation proposes reducing these VNFs down to their application logic and delivering everything else through a common, shared open-source PaaS: lifecycle management, databases, service mesh, monitoring, logging, etc. Disaggregation dramatically reduces the cost and complexity of the network, and gives operators the agility they need to rapidly create and innovate in the coming 5G environment.

 

Disaggregation Will Help Telco Operators Moving Forward

The idea of disaggregation is, of course, very different from how many telco operators have constructed their networks in the past. Historically, telco networks were built using a mix-and-match approach from various vendors. Each vendor’s solution had its own CLI, database, lifecycle management, redundancy scheme and so on. Operators, in turn, learned how to use all these different tools, and were (not surprisingly) afraid to add new vendors to the mix for fear that they wouldn’t “fit” into their existing architecture. This approach didn’t encourage innovation; it stifled it.

With server virtualization, telco operators began to see the value of simplification and unification. But the real value of virtualization occurs when operators move beyond hardware and virtualize their underlying services platform. For example, in order to orchestrate the VNFs, the operator builds a multivendor orchestration system, which then needs to communicate with vendors’ proprietary VNF managers. Instead, what if all vendors simply delivered components as containers and the operator used widely adopted Kubernetes to orchestrate the containers? The same goes for service mesh, monitoring, logging and other services. Instead of operating a network designed by dozens of different vendors, you have a simple, shared architecture that features a common design and elements.

Disaggregation is at the heart of Affirmed’s new 5GC mobile core platform. Designed around open-source technology, 5GC leverages a shared PaaS architecture that allows telco operators to virtualize their networks for much higher efficiencies, improved agility and rapid delivery of new and innovative services. We see it as the difference between being a leaner and a leader. We don’t want our customers to lean on us for their future. We want to lead them to the future, which we strongly believe is open-source, cloud-native technology from the world’s most innovative companies.

Vendors still have an important role to play in the future, provided their relationship with telco operators is re-imagined around the reality that change and churn aren’t the enemy, but the opportunity. By building a software architecture around microservices and a standard PaaS layer, telco operators and their vendors can strategically respond to change and churn with agility and success. If telco operators expect to stay one step ahead in the race to 5G revenue, opening their network to more innovation is the single most important thing they can do.

Affirmed is “must-see 5G” at MWC19

by Affirmed

This year’s Mobile World Congress theme is ‘Intelligent Connectivity’. At the heart of intelligent connectivity are 5G technologies, which will be center stage at this year’s conference. Unlike previous years, however, the 5G “story” will be woven into a variety of digital transformation topics, such as immersive content and artificial intelligence. This reflects the fact that 5G isn’t simply another ‘G’ of efficiency and speed: it promises to affect nearly every aspect of the future, transforming the daily lives of mobile consumers and re-inventing the business strategies of many enterprises.

The industry has heard all this before, of course. The difference at MWC19 is that visitors can expect to see a lot more of what 5G can really do, particularly in the new wave of 5G-enabled mobile services. For mobile service providers, that’s a really big difference. With the massive revenue opportunities for 5G technologies in the enterprise space, 5G offers a tantalizing market for service providers not seen since the Internet boom and the rise of the dot-coms.

Affirmed solutions are at the center of some of the world’s leading 5G use cases, from IoT applications to next-gen 5G/Wi-Fi communications. In fact, as you move through the conference, you’re likely to see Affirmed in action wherever you see mobile service providers embracing the future of 5G services. This includes Affirmed’s 5G Webscale mobile core solution, which offers capabilities such as network slicing, real-time analytics, closed-loop automation and support for 5G NR.

We’ll be showcasing some exciting 5G applications at MWC19 in our own booth (Hall 2, booth #2D50), so be sure to stop by, say “hi” and check out some of the great demos we’ll have on tap, including:

  • Closed Loop Services Automation & Dynamic Network Slicing
  • Affirmed Mobile Core 5GC
  • Affirmed Edge Solution with Augmented Reality
  • 5G EPC
  • Affirmed Mobile Core as a Service

We’re also proud to have our joint solutions appear at many of our partners’ booths at MWC19, so keep your eyes open for Affirmed at the displays for Ciena, Dell, Intel, Juniper, RedHat, Tech Mahindra and VMware.

So, if you’d like to see more of Affirmed up close, schedule a meeting with us in Barcelona—we’ll be there every day from February 25th through the 28th.

The Importance of Performance in a 5G World

by Affirmed

The industry is racing to the next generation – 5G – which will be virtualized, replacing the costly, legacy, hardware-based infrastructure that has been in place for decades with commercial off-the-shelf (COTS) servers that offer a host of cost and operational advantages. But did you know they offer network performance benefits as well?

5G Network Performance

As network operators transform existing 4G networks, the need to continually improve performance and scalability has become a key business imperative to drive down the cost per bit delivered.

With the arrival of 5G, this need becomes even more pronounced in order to profitably serve the bandwidth demands of 5G applications and services. This will require significantly more data-processing horsepower from the network than we have ever seen, to support the exponential rise in high-bandwidth data traffic (e.g., video), the creation of data-rich services and the introduction of billions of “talking” machines via the Internet of Things (IoT). 5G network performance will be required across all areas of the network, especially the mobile core.

And these network performance gains are not a one-time event.

We are fortunate to be part of an industry that never stands still, and that continues to innovate and improve. In no area is this more evident than in the continued advances in processing technologies from leaders like Intel.

By combining these advances with innovative, market-proven NFV solutions for the mobile core, we have achieved performance gains of as much as 10X over purpose-built mobile core offerings and competitive virtualized offerings.

For those of us at Affirmed Networks, seeing this type of network performance confirms what we’ve known all along: virtualized network functions (VNFs) that leverage the latest server technology and are architected for high data plane throughput are the best path to building the high-speed, high-capacity networks that 5G will require.

Learn more about the performance advantages of vEPC over legacy hardware in this Intel Performance Report.