

A More Power-full Approach to Network Planning

by Tim Irwin

Like most people, I was horrified by the Texas power grid’s recent failure and the devastating impact on many citizens’ lives. I think many people wondered how something like this could happen. As someone in the telecommunications industry, it occurred to me that the power and telecommunications sectors face similar planning issues.

On the surface, you might not think that power grids and telecommunications networks have much in common, but they do. Both power and communications are fundamentally infrastructures that we as a society are increasingly reliant upon. Both experience fairly predictable demand behaviors, whether it’s everyone turning on their air conditioning during a sweltering summer day or everyone calling home on Mother’s Day. These events are predictable because history tells us that past usage trends are reasonable predictors of future consumption behaviors.

Both types of infrastructure are also subject to anomalous outages from technological or natural forces. We tend to know that these atypical events will eventually happen because probabilities also tell us these events are likely to occur. Still, the exact size, scope, and scale are often difficult to predict with precision. Infrastructure planners for both services spend much time worrying about these kinds of reliability, redundancy, and overall capacity issues. 

Another area of similarity is the cost of redundancy. Spare capacity cannot be instantaneously created out of thin air, and this extra capacity incurs a cost in any infrastructure. Unfortunately, there is a tendency to question the expense of that spare capacity on “sunny days,” when it isn’t needed. It is not until we encounter the “rainy day” scenario that we appreciate the rationale behind it.

Planning for Failure

Returning to the realm of telecommunications and network planning, let’s look at how an operator typically plans for their future capacity needs. Most operators rely on premises-based servers (both virtualized and non-virtualized) to handle their infrastructure workloads. Let’s look at an elementary network capacity planning model.

As a rule of thumb, a single server should never run at higher than 80 percent of its full capacity. To understand why, think about your desktop or laptop: as your computer’s memory and processing utilization get closer to 100 percent, performance drops off sharply.

So you might think an operator only needs to purchase 20 percent more capacity than they actually use. Not so fast. In a typical active/standby failover model, every primary server requires reserve capacity that can take on its workload in the event of failure. In the illustration below, I show the example of a single pair of redundant servers, Server A and Server B. Each server runs at 40 percent during regular operation so that, if Server A should fail, Server B can assume Server A’s workload and still stay under the 80 percent threshold.
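The failover arithmetic above can be sketched in a few lines of Python (the function name is invented; the 80 percent ceiling and the two-server pair are the figures from this example):

```python
def active_standby_load(total_load_pct, ceiling_pct=80.0):
    """For an active/standby pair sharing a workload, each server's normal
    load must be low enough that the survivor can absorb both shares and
    still stay under the capacity ceiling."""
    per_server = total_load_pct / 2        # normal load on each server
    after_failover = total_load_pct        # survivor carries the full load
    return per_server, after_failover, after_failover <= ceiling_pct

# A workload equal to 80% of one server: each server runs at 40% normally,
# and on failover the survivor runs at exactly the 80% ceiling.
per, peak, safe = active_standby_load(80.0)
```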

The need for redundancy complicates network capacity planning. For example, if an operator plans for ten percent month-over-month growth, demand more than triples year-over-year. Unfortunately, most operators can’t merely add extra capacity a month in advance because of the time required to purchase and install new hardware. There are multiple steps involved—budget approval, vendor quoting, supply chain processes, shipping times, installation and cabling, etc.—that can take anywhere from six to twelve months to complete. In other words, operators realistically need to begin this process as much as a year in advance.

Generally speaking, most planners project their capacity from the end of the current fiscal year to the end of the next fiscal year and order the necessary amount of capacity at the beginning of the fiscal year. Pre-planning in this way avoids being caught “short-served” at the end of the year, but at a cost: for most of the year, operators end up sitting on idle capacity, particularly in the first six to nine months of the year.
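A quick back-of-the-envelope calculation makes both points concrete: how fast compound growth runs away, and how much capacity sits idle when a full year’s worth is provisioned on day one. (The ten percent monthly growth figure comes from the example above; everything else here is illustrative.)

```python
growth = 1.10                                  # 10% month-over-month growth
demand = [growth ** m for m in range(13)]      # relative demand, months 0..12
capacity = demand[12]                          # ordered upfront for year-end

print(f"year-over-year growth: {capacity:.2f}x")   # ~3.14x
# Average utilization across the year when the whole year's capacity is
# provisioned at the start:
avg_util = sum(demand[:12]) / (12 * capacity)
print(f"average utilization: {avg_util:.0%}")      # ~57%, i.e. ~43% idle
```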

Advantages of Capacity Planning and the Cloud

But what if operators could expand their network capacity using the cloud? Spinning up workloads in the cloud takes a fraction of the time, meaning that telcos could add capacity as they needed it, not a year before they needed it. This approach to network capacity management saves the operator from paying for unused server capacity, along with the associated power, real estate, and remaining operational costs required to maintain those unused servers.

There are several other advantages to moving operator infrastructure into the cloud. For example, consider that operator networks face not only seasonal spikes in usage but also daily spikes. The chart below illustrates a typical day in the life of a network. Notice there can be a significant difference in the resource requirements throughout the day.

Only Pay for Resources You Use

Even with virtualized servers, operators still need to plan for enough physical infrastructure capacity to cover the peak busy hour. In a genuinely cloud-native architecture, resources can be dynamically allocated during the busiest times of the day and deallocated when they are no longer needed. With cloud-based infrastructure, operators pay only for the resources they use; they are never charged for unused capacity. And because many workloads from many customers are statistically multiplexed on the cloud, operators can dynamically spin up additional resources as required.
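A minimal sketch of this allocate/deallocate behavior is a proportional scaling rule: size the pool so per-replica load returns to a target. (The function and its figures are illustrative, not any vendor’s implementation; real autoscalers such as the Kubernetes HorizontalPodAutoscaler apply a similar ratio.)

```python
import math

def target_replicas(current, load_per_replica_pct, target_pct=60.0):
    """Proportional autoscaling rule: scale the pool so that per-replica
    load returns to the target, rounding up and never going below one."""
    desired = math.ceil(current * load_per_replica_pct / target_pct)
    return max(desired, 1)

# Busy hour:  10 replicas at 90% each -> grow to 15.
# Quiet hour: 10 replicas at 12% each -> shrink to 2, releasing (and no
# longer paying for) the other 8.
```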

Visibility of Network Cost

Cloud-based infrastructure also provides better visibility into actual network costs. Operator costs are analogous to an iceberg: often only the physical costs (i.e., the hardware) are visible, while the real costs of running that infrastructure—operations, power, real estate, etc.—remain partially obscured. In the cloud, by contrast, the combined CapEx and OpEx costs are visible as a single total monthly cost.

New Opportunities for Service Expansion

Finally, the cloud opens up new opportunities for service expansion by eliminating most of the network’s upfront costs. Today, new service rollouts require much planning and a compelling business case because of the substantial sunk costs involved. By using cloud infrastructure, operators can dramatically reduce those costs and “dip their toes” into new services and new markets without having to make large investments while retaining the ability to scale up network capacity quickly if those services take off.

Cloud, and Especially Cloud-Native, is the Future

What kind of cost savings can be achieved? Early models show that operators can save 30 percent or more by moving from an on-prem infrastructure model to the cloud. And the total cost of ownership (TCO) gets even more attractive when using a cloud-native rather than a traditional virtualized cloud infrastructure. Our research shows that a cloud-native infrastructure can reduce TCO by another 25 percent.
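Taken together, and assuming the 25 percent applies to the already-reduced cloud TCO rather than the on-prem baseline, the two savings compound:

```python
on_prem_tco = 1.00                          # normalized on-prem baseline
cloud_tco = on_prem_tco * (1 - 0.30)        # 30% saving -> 0.70
cloud_native_tco = cloud_tco * (1 - 0.25)   # further 25% saving -> 0.525
total_savings = 1 - cloud_native_tco        # ~47.5% vs. the on-prem baseline
```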

As attractive as the cloud is, most operators aren’t ready yet to move their entire network infrastructure into the cloud, which is understandable. A hybrid model that mixes on-prem infrastructure with cloud-based infrastructure allows operators to expand network capacity in the cloud on demand, a technique known as cloud bursting. By doing this, operators can make sure they always have enough capacity to handle whatever nature or the future throws at them.

Hyperscale Cloud and Mobile Core: Why They’re Better Together

by Ron Parker

What happens when you put a mobile core in a hyperscale cloud? Awesomeness.

For years, even before the cloud, there was software-as-a-service. Then followed a sort of “service mania” as vendors offered infrastructure-as-a-service, network-as-a-service, storage-as-a-service, ad nauseam. In the telco world, however, networks were still built primarily with dedicated boxes running proprietary software. These networks weren’t cheap or easy to scale, but they were reliable. This article outlines mobile-core-as-a-service solutions and the advantages of a fully integrated hyperscale cloud and mobile core.

Cloudification & Mobile Core

Today, many elements of the telco network have been virtualized and even cloudified. The result has been cheaper, more scalable, yet still reliable networks. One area that resisted this sweeping cloudification was the mobile core. Though virtualized, the mobile core remained very much an on-prem solution. That is, until Affirmed announced UnityCloud, the world’s first 5G mobile core that can be fully deployed in the cloud as a mobile-core-as-a-service, and integrates with a hyperscale cloud platform.

Benefits: Mobile Core on a Hyperscale Cloud

Running a mobile core in a hyperscale cloud has a number of benefits:

  • The deployment can be fully automated to increase service velocity and accelerate the time to revenue for new 5G services
  • Operators can orchestrate cloud workloads and private workloads using Kubernetes to compose new services in any configuration
  • Network functions can be scaled up or down automatically based on network demand
  • Operators can automate their continuous integration/delivery (CI/CD) pipeline through automated software upgrades
  • Network fault detection can also be automated and enhanced through AI and machine learning tools

Managing your mobile core with ARM and Azure Arc

UnityCloud can run on the Microsoft Azure cloud platform as well as in private cloud environments or on premises-based equipment. Within UnityCloud is a complete set of cloud-native functions (CNFs) built on a stateless microservices architecture. These CNFs provide both the control and user plane functions and can be split across different environments; for example, with the control plane functions hosted in the Azure cloud and the user plane functions hosted on premises-based, bare-metal servers.

The UnityCloud services reside in the platform-as-a-service layer, where they perform service assurance, CNF lifecycle management, security, and edge functions. One of the great features of deploying UnityCloud on Azure is the Azure Resource Manager (ARM), which serves as a GUI portal and an API layer. ARM lets you easily manage everything in the Azure environment and create templates to automate and orchestrate services.

Automation and unified management are critical to operating a 5G mobile core, but what happens when elements of the core are split between Azure and non-Azure environments?

With Azure Arc, you can manage non-Azure infrastructure from the same GUI portal. So, we’re not just allowing operators to deploy their mobile core any way and anywhere they want; we’re doing it in a way that doesn’t add any complexity to the management of that mobile core.

Real-world use cases for mobile-core-as-a-service

UnityCloud is already helping some of the world’s most sophisticated mobile operators deploy 5G networks. For example, in Finland, a leading operator is using UnityCloud to deploy both 5G smartphone service and fixed broadband wireless using a mix of 4G and 5G radio access networks. In Latin America, a tier-one operator with 50 million subscribers is deploying its network services closer to the edge with UnityCloud, providing a better customer experience to subscribers across a widely dispersed geographic area. And, in the UK, a tier-one operator has dramatically reduced its network complexity with UnityCloud.

While mobile core efficiencies are a big part of UnityCloud’s story, content optimization is also important. UnityCloud includes a host of value-added content optimization services including TCP optimization, video optimization, firewall, carrier-grade NAT, and more. Consolidating these services, which were typically purchased from different vendors, into a single-vendor solution further simplifies the 5G network.

We expect that other mobile-core-as-a-service solutions will follow from other vendors, but even so, UnityCloud will have a unique advantage: full integration with a hyperscale cloud platform, Microsoft Azure. While accelerated deployment is one obvious advantage of this, UnityCloud can also now take advantage of all the features and benefits of the Azure cloud ecosystem including AI and machine learning. In fact, you could say UnityCloud has taken the concept of “cloud native” to a whole new level.

The Five Key Traits of Highly Successful 5G Networks

by Ron Parker

The new year gives each of us an opportunity to reflect on self-improvements for the future, and maybe networks are no different. Right now, your network could be telling itself that 2021 is the year it’ll finally get serious about IoT or stop talking about cloud-native and take the plunge. In which case, your network has its work cut out for it. For operators looking to get their networks in shape, this blog outlines key elements for successful 5G networks.


5G Requirements

Getting your network in shape for the 5G applications of the future isn’t a simple matter of reducing operational fat and running more hardware. It’s a completely different approach that requires unlearning some unhealthy habits, such as:

  • Gaining too much weight (in the form of new hardware) every time the network needs to expand
  • Avoiding network automation because it’s too expensive, too exotic, or too scary
  • Limiting major software releases to once a year, while the competition is continuously innovating and improving
  • Accepting downtime during maintenance windows as a necessary evil
  • Piecing network visibility together from different tools that you know will never work together perfectly

In fairness, those habits were ingrained over years of operating a 2G/3G/4G network. But a 5G network doesn’t need telecom operators so much as telecom innovators, and innovation means embracing change. In order to support 5G innovation, telcos must learn to match the agility of over-the-top (OTT) providers, eliminate downtime, automate as much of their operations as possible and leverage both the cloud and edge computing to ensure they deliver amazing experiences to their users.


Five Key Elements of a 5G Network

At the heart of the 5G service experience is the 5G mobile core. There are a lot of different technology components that go into making a great 5G network, from virtualized RAN to container orchestration (Kubernetes), but there are five key elements that every successful 5G mobile core requires.

App Store Simplicity

“Plug” and “play” probably aren’t the first two words that come to mind when you think of a mobile network’s service architecture. Plug-and-play simplicity, however, is exactly what telco operators need to rapidly deploy and manage 5G services. Think of it as an internal app store, with portals and APIs that allow you to drag and click your way to creating new services.

Containerized Workloads

Virtualization was a great step forward. Now telcos need to take the next step, toward containerization. Containerized workloads provide the freedom to create services independent of hardware and software so they can run anywhere.

Network Slicing

We’ve been singing the praises of network slices for years, but 5G is where slicing really shines. That’s because 5G can serve so many different services to so many different businesses and consumers, which calls for the kind of network service differentiation that network slicing delivers.

Location Independence

In the past, the user and control planes sat on the same server/appliance. If you needed more of one, you got more of the other—even if you didn’t need it—because you couldn’t separate the two. Now, with control and user plane separation (CUPS), you can keep the user and control planes independent and finally scale network resources efficiently. CUPS opens up a range of deployment possibilities to improve 5G service delivery and reduce costs: local breakout at the edge, hybrid clouds, public cloud vs. on-prem edge, etc.
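To make the scaling point concrete, here is a hypothetical sizing sketch (the function and per-unit figures are invented for illustration): with CUPS, user-plane capacity follows throughput while control-plane capacity follows signaling load, so the two pools no longer move in lockstep.

```python
import math

def size_planes(user_gbps, signaling_tps,
                gbps_per_up_node=10.0, tps_per_cp_node=5000.0):
    """Size user-plane and control-plane pools independently; before CUPS,
    both dimensions were forced onto the same server/appliance."""
    up_nodes = max(1, math.ceil(user_gbps / gbps_per_up_node))
    cp_nodes = max(1, math.ceil(signaling_tps / tps_per_cp_node))
    return up_nodes, cp_nodes

# A throughput-heavy edge site: lots of user plane, minimal control plane.
# size_planes(95.0, 4000) -> (10, 1)
```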

Access Independence

Wi-Fi and wireline technologies still have a role to play in 5G communications, which means they need to be able to access the 5G core (and vice versa). An effective 5G mobile core is one that allows telcos to manage and apply common policies to non-3GPP traffic such as Wi-Fi, cable/DSL, and fiber.


In Closing

As you can see, 5G involves quite a “core” workout. Fortunately, there is an easier way to get your core in shape quickly: 5G mobile core as a service. It’s a new offering from Microsoft that’s based on Affirmed’s industry-leading 5G core technology and hosted in Microsoft’s new Azure for Operators environment. If that sounds like something in your future, tune in for my next blog on what “5G mobile core as a service” means and why it’s a game-changer for 5G operators.

Going Native: Why Carriers Need to Embrace Cloud-Native Technologies Right Now

by Ron Parker

The cloud isn’t an if. It’s a when. And it will probably start like this: a few forward-thinking carriers will begin moving large portions of their network functions into the cloud and realize that the savings are almost shocking. Not the 2X magnitude CapEx savings we’ve seen from replacing physical servers with cloud-based servers, but a 10X magnitude CapEx+OpEx savings that will occur when carriers move network hardware, applications and a significant portion of operations into the cloud. And when that happens, the rush to the cloud will be deafening.

Right now, the migration to the cloud seems relatively restrained, almost quiet. 5G still feels far off in the future. (In reality, 5G’s arrival is imminent.) Meanwhile, carriers are looking to get more mileage out of their existing infrastructure, and replacing it with cloud servers isn’t a compelling narrative. The compulsion will come when early adopters start proving that the cloud is a game-changer. At that moment, carriers will need to move quickly or be left behind. If they don’t already have a cloud-ready network architecture in place, they can forget about coming in first, second or third place in the race to deliver 5G services. That may sound like a dire prediction, but it doesn’t have to be.


What Cloud Native Technologies Hold for Carriers

A cloud-native network architecture can be had today—without ripping and replacing current infrastructure and without waiting for 5G to realize a return on investment. We see this as a phased approach to 5G: start investing today in cloud-native capabilities such as control and user plane separation (CUPS), containers, Kubernetes, and cloud-native network functions (CNFs) to run your existing network more efficiently, then seamlessly shift those capabilities into the cloud when you’re ready.

Benefits of Going Cloud-Native

There are several important benefits of using cloud-native technologies now.

  • It delivers network agility by allowing carriers to quickly create and turn up new services, particularly private networks that will serve an increasing number of enterprises.
  • It offers automation of operations, which can dramatically reduce costs and accelerate time to market.
  • It provides network flexibility as carriers can deploy the same cloud-native architecture on their private infrastructure to serve millions of existing subscribers, for example, while spinning up new enterprise services in the cloud to avoid impacting those subscriber services.

This hybrid approach, by the way, is how we expect most carriers will consume the cloud initially. It’s critical as carriers pursue more enterprise opportunities and use cases that they continue to deliver the same or better levels of service to their existing subscriber base—it is, after all, where the bulk of their revenues come from today. Focusing on enterprise services in the cloud reduces risk and allows carriers to easily spin up new network slices for each enterprise customer. This may have been less of an issue in the past when carriers were managing private LTE networks for a handful of large customers, but it becomes unmanageable in a traditional network architecture when you have thousands of enterprise customers.

As you would expect, of course, the benefits of a cloud-native architecture are most apparent in the cloud. That’s especially true when the cloud-native architecture and the cloud architecture are managed by the same company, as they are today with Affirmed Networks and Microsoft Azure. Whether you deploy our cloud-native architecture in your own private network or in the public cloud, you’re getting the same code—tested and hardened in both environments—for a solution that is fully prepared for CI/CD environments from day one. No other company today can say that.

And, rest assured, day one is coming sooner than you think.

What Is Orchestration and Why Is It So Important?

by Affirmed

In this second blog post about the TM Forum report on Orchestration, we’re going to look closely at the definition of orchestration in networking as it pertains to today’s virtualized mobile networks.

“In general, the definition of network orchestration is too narrow and too specific to VNF lifecycle management. Operators have backed away from talking about OSS recently, fearing it sounds ‘retro’, but it is clear we still need a top-level layer of intelligence to manage end-to-end services, which is what OSS has traditionally done.” – Ron Parker, Chief Architect, Affirmed Networks


Defining Orchestration

The basis of the TM Forum report was a broad definition of orchestration as “end-to-end service management through zero-touch (automated) provisioning, configuration, and assurance.” After speaking with contributors to the report, it became clear that orchestration, as it is being implemented in live networks, happens at multiple levels or layers, influencing virtualized and physical functions, including OSS, BSS, and NMS.

Source: Vodafone’s Kevin Brackenpool at MEF London Seminar, May 2016

Generally speaking, there are four places in an operator’s environment where some kind of orchestration can take place.



“What you’re setting out to do is abstract the complexity and drive modularity. If you imagine a future network state where everything is virtualized – all software-defined networks – every customer might have a completely different set of virtualized functions. In a traditional approach, you’d never get over that complexity – if something were to break, you’d never be able to fix it.” – Dr. Lester Thomas, Chief Systems Architect, Vodafone Group

Service providers like Vodafone, AT&T, and others are proposing that there will be multiple platforms within the network, each abstracting some of the complexity. As examples, Dr. Thomas points to OpenFlow abstracting the complexity of an individual router, while NETCONF and YANG abstract SDN controllers. The overarching point of abstracting at a higher level is to simplify the orchestration and utilize “intent” to manage the policies.

We’ll get to that topic in a future blog post.