Going Native: Why Carriers Need to Embrace Cloud-Native Technologies Right Now

by Ron Parker

The cloud isn’t an if. It’s a when. And it will probably start like this: a few forward-thinking carriers will begin moving large portions of their network functions into the cloud and realize that the savings are almost shocking. Not the 2X magnitude CapEx savings we’ve seen from replacing physical servers with cloud-based servers, but a 10X magnitude CapEx+OpEx savings that will occur when carriers move network hardware, applications and a significant portion of operations into the cloud. And when that happens, the rush to the cloud will be deafening.

Right now, the migration to the cloud seems relatively restrained, almost quiet. 5G still feels far off in the future. (In reality, 5G’s arrival is imminent.) Meanwhile, carriers are looking to get more mileage out of their existing infrastructure, and replacing it with cloud servers isn’t a compelling narrative. The compulsion will come when early adopters start proving that the cloud is a game-changer. At that moment, carriers will need to move quickly or be left behind. If they don’t already have a cloud-ready network architecture in place, they can forget about coming in first, second or third place in the race to deliver 5G services. That may sound like a dire prediction, but it doesn’t have to be.

 

What Cloud Native Technologies Hold for Carriers

A cloud-native network architecture can be had today, without ripping and replacing current infrastructure and without waiting for 5G to realize a return on investment. We see this as a phased approach to 5G: start investing now in cloud-native capabilities such as control and user plane separation (CUPS), containers, Kubernetes and cloud-native network functions (CNFs) to run your existing network more efficiently, then seamlessly shift those capabilities into the cloud when you’re ready.
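To make "containerized network functions" concrete, here is a minimal sketch of how a CUPS-style user-plane function might be declared as a Kubernetes Deployment, built as a Python dictionary of the kind you would serialize to YAML or submit to the Kubernetes API. All names (image, labels, port) are illustrative assumptions, not taken from any real product:

```python
# Sketch of a Kubernetes Deployment for a hypothetical CUPS user-plane
# function (UPF). Image name, labels and ports are illustrative only.

def upf_deployment(replicas: int = 2) -> dict:
    """Build a Deployment manifest for a containerized user-plane CNF."""
    labels = {"app": "upf", "plane": "user"}   # CUPS: user plane only
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": "upf", "labels": labels},
        "spec": {
            "replicas": replicas,              # scale the user plane independently
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {
                    "containers": [{
                        "name": "upf",
                        "image": "registry.example.com/upf:1.0",  # hypothetical image
                        "ports": [{"containerPort": 2152, "protocol": "UDP"}],  # GTP-U
                    }],
                },
            },
        },
    }

manifest = upf_deployment(replicas=3)
print(manifest["spec"]["replicas"])  # → 3
```

The point of the sketch is that scaling the user plane becomes a one-line change to a declarative manifest, which is what lets the same artifact run on private infrastructure or in the public cloud.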

Benefits of Going Cloud-Native

There are several important benefits of using cloud-native technologies now.

  • Network agility: carriers can quickly create and turn up new services, particularly the private networks that a growing number of enterprises will demand.
  • Operational automation: automating operations can dramatically reduce costs and accelerate time to market.
  • Network flexibility: carriers can deploy the same cloud-native architecture on their private infrastructure to serve millions of existing subscribers, for example, while spinning up new enterprise services in the cloud to avoid impacting those subscriber services.

This hybrid approach, by the way, is how we expect most carriers will consume the cloud initially. It’s critical as carriers pursue more enterprise opportunities and use cases that they continue to deliver the same or better levels of service to their existing subscriber base—it is, after all, where the bulk of their revenues come from today. Focusing on enterprise services in the cloud reduces risk and allows carriers to easily spin up new network slices for each enterprise customer. This may have been less of an issue in the past when carriers were managing private LTE networks for a handful of large customers, but it becomes unmanageable in a traditional network architecture when you have thousands of enterprise customers.

As you would expect, of course, the benefits of a cloud-native architecture are most apparent in the cloud. That’s especially true when the cloud-native architecture and the cloud architecture are managed by the same company, as they are today with Affirmed Networks and Microsoft Azure. Whether you deploy our cloud-native architecture in your own private network or in the public cloud, you’re getting the same code—tested and hardened in both environments—for a solution that is fully prepared for CI/CD environments from day one. No other company today can say that.

And, rest assured, day one is coming sooner than you think.

What Is Network Orchestration and Why Is It So Important?

by Affirmed

In this second blog post about the TM Forum report on Orchestration, we’re going to look closely at the definition of orchestration in networking as it pertains to today’s virtualized mobile networks.

“In general, the definition of network orchestration is too narrow and too specific to VNF lifecycle management. Operators have backed away from talking about OSS recently, fearing it sounds ‘retro’, but it is clear we still need a top-level layer of intelligence to manage end-to-end services, which is what OSS has traditionally done.” – Ron Parker, Chief Architect, Affirmed Networks

 

Defining Network Orchestration

The basis of the TM Forum report was a broad definition of network orchestration as “end-to-end service management through zero-touch (automated) provisioning, configuration, and assurance.” After speaking with contributors to the report, it became clear that orchestration, as it is being implemented in live networks, happens at multiple levels or layers and influences both virtualized and physical functions, including OSS, BSS, and NMS.

Source: Vodafone’s Kevin Brackenpool at MEF London Seminar, May 2016

Generally speaking, there are four places in an operator’s environment where some kind of orchestration can take place.

 

Why Do We Need Automation?

“What you’re setting out to do is abstract the complexity and drive modularity. If you imagine a future network state where everything is virtualized – all software-defined networks – every customer might have a completely different set of virtualized functions. In a traditional approach, you’d never get over that complexity – if something were to break, you’d never be able to fix it.” – Dr. Lester Thomas, Chief Systems Architect, Vodafone Group

Service providers like Vodafone, AT&T, and others are proposing that there will be multiple platforms within the network, each abstracting some of the complexity. As examples, Dr. Thomas points to OpenFlow abstracting the complexity of an individual router, while NETCONF and YANG abstract SDN controllers. The overarching point of abstracting at a higher level is to simplify network orchestration and to use “intent” to manage policies.

We’ll get to that topic in a future blog post.

Talking Points Around Standards, Open Source, and the Implementation of End-to-End Orchestration

by Affirmed

In this final blog post about the TM Forum report on Orchestration, we look at the role of standards and open source in end-to-end orchestration, as well as implementation strategies.

Standards and Open Source: The Rise of Three Camps

Among standards bodies and open source groups, TM Forum is relied upon by a large majority of service providers for help with orchestration, with ETSI, MEF, and OASIS also cooperating to develop the Hybrid Network Management Platform.

Almost two-thirds of service providers view open source as either extremely or very important for NFV and SDN deployment. Strong alignment is needed in a digital ecosystem of partners where it is important that everyone understand the requirements in the same way and work together on a common source code.

AT&T, China Mobile and Telefónica are vying for open source leadership. By contributing ECOMP to open source, AT&T is clearly pushing for its platform to become the de facto industry standard for NFV orchestration. The company is planning to contribute the core orchestration code from ECOMP, but not the policy or analytics components, which it considers proprietary. China Mobile, which supports the OPEN-Orchestrator Project (OPEN-O), is unlikely to adopt AT&T’s ECOMP, although AT&T has said that the ECOMP code itself is vendor-neutral and the company will consider other integrators. Telefónica has contributed its virtual infrastructure manager and orchestrator to Open Source MANO (OSM), an ETSI-sponsored group. Ultimately, all the players need to realize that change will come faster if everyone works together and agrees on how to federate disparate approaches.

Strategies for Implementing Orchestration

Service providers can move toward becoming platform providers by taking an enlightened strategy toward adopting orchestration. Key elements include:

Understand what end-to-end orchestration means. Orchestration goes beyond network functions virtualization (NFV). It is about automation. The NFV Orchestrator role specified in ETSI’s NFV MANO isn’t enough. To manage hybrid networks and give customers the ability to control their own services, end-to-end automation is required, and that includes operational and business support systems (OSS/BSS).

Adopt a platform approach. Platform providers like Airbnb, Amazon, Google, Netflix and Uber have achieved success by providing an interface between customers and sellers. Telecom companies like BT, Orange and Vodafone see orchestration as a strategic step toward becoming platform providers for third parties, building their businesses by curating ecosystems that link end customers or users with producers of goods and/or services. Network operators need a similar model to offer the network platform as a service.

Determine where orchestration has to happen. Orchestration happens everywhere, and systems must communicate with each other and with many other physical and virtual elements to deliver a service request that the customer initiates through the customer portal. This spans the technology layer, which includes physical and virtual functions, the resource layer where functions are modeled as logical resources, the services layer where provisioning, configuration and assurance happen, and the customer layer.

Use common information models, open APIs and intent-based management. A service provider’s master service orchestrator will never have complete visibility into other providers’ networks and operational and business support systems. Service providers will automate service provisioning and management end to end by agreeing to use the same information, data models, and APIs so that orchestrators in different domains can communicate. Intent-based management abstracts the complexity of the network and uses customer intent and policy to manage it. The answer lies less in the orchestrator and more in standardizing the things that are being orchestrated.

Implement closed control loops, policy and analytics. Closing the loop means collecting and analyzing performance data to figure out how the network can be optimized and then applying policy, usually through orchestration, to make the changes in an automated way.

Design in security. Trying to bolt security features on afterwards doesn’t work. Detecting configuration-related vulnerabilities requires an orchestrator that can call on internal or external security functions and apply security policies to users or systems accessing NFV components.

Chart the migration paths. For service providers, success will depend greatly on how well they plan the transition, setting a clear migration strategy both technologically and culturally. This really comes down to learning to think like a software company.

Work toward a common goal in open source groups. Aligning around a single approach would certainly make end-to-end orchestration easier, but short of that ideal, ways must be found to federate the approaches through collaborative work on common information and data models and APIs. Developing the technology and business models needed in the world of 5G and the Internet of Everything can only happen if everyone works together.

This is the final blog in our series on the TM Forum report on Orchestration. The report confirms that orchestrating services end-to-end across virtualized and physical infrastructure is indeed a huge challenge—but not an insurmountable one.

Important Steps & Requirements for E2E Network Orchestration

by Affirmed

In this fourth blog post about the TM Forum report on Orchestration, we look at key steps and architectural requirements service providers must consider as they move toward the Operations Center of the Future (OpCF).

Participants in workshops and Catalyst projects (including CIOs and VPs in networking and operations, network and systems architects, IT managers, OSS/BSS directors, and software developers) ranked seven orchestration components in order of importance.

Here are some takeaways from the report.

Standardized Patterns

Close to forty percent of participants ranked common information models and APIs as their number one concern, while two-thirds put standardized patterns in their top three. According to Shahar Steiff, Assistant Vice President, New Technology, PCCW Global, end-to-end service orchestration is a lot like playing with Lego blocks and Meccano parts: each is easy to use until you try to combine them. Network operators must partner to deliver the services customers are demanding, but some partners don’t speak the same language.

The joint MEF and TM Forum Network-as-a-Service Catalyst is developing the common language, definitions, information models and APIs needed to help service providers automatically order, provision, manage and assure virtualized services across partners’ boundaries. Using the Catalyst approach, time-to-service-delivery can be reduced from as long as three months to less than ten minutes, with the potential to get that down to milliseconds in future phases. Time-to-market is faster because there is no physical equipment to install.

Service providers will have to agree to use the same information and data models, along with APIs, so that orchestrators in different domains can communicate. This, combined with intent-based management (which abstracts the complexity of the network and uses a customer’s intent and policy to manage it), allows service providers to automate provisioning and management end to end.

While it is unlikely that the industry will coalesce around a single data model, service providers and suppliers will probably adopt a few and map them to one another. TM Forum’s high-level information model provides standard definitions for information that flows between communications service providers and their business partners, and defines a common vocabulary for implementing business processes. This reduces complexity by providing an off-the-shelf model that can be easily and quickly adopted by all parties.
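Mapping a few data models to one another is largely mechanical once the field correspondences are agreed. A toy sketch, with entirely hypothetical field names standing in for a partner's order model and a shared common vocabulary:

```python
# Toy model-to-model mapping: translate one partner's service-order
# record into a shared common model. All field names are hypothetical.

COMMON_FROM_PARTNER_A = {
    "svcName": "serviceName",     # partner field -> common field
    "bw_mbps": "bandwidthMbps",
    "siteId":  "locationId",
}

def to_common_model(order: dict, mapping: dict) -> dict:
    """Rename partner-specific fields into the common information model."""
    return {common: order[local] for local, common in mapping.items() if local in order}

order_a = {"svcName": "ethernet-access", "bw_mbps": 100, "siteId": "LON-01"}
print(to_common_model(order_a, COMMON_FROM_PARTNER_A))
```

In practice each partner pair needs only such a mapping table, not a bespoke integration, which is why a small number of well-defined models can federate many players.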


Dynamic APIs

Dynamic APIs are the wave of the future. They can mediate connections between diverse systems, allowing the payload to vary depending on the product or service that’s being ordered, procured or managed. In the case of the Catalyst, the Open Digital API acted as a bridge between an orchestration system and the OSS/BSS, allowing suppliers to invest in a single set of APIs and use them to supply multiple products to many buyers. Conversely, buyers could also use a single API investment to integrate with many suppliers, which allows a true marketplace to form.
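The "payload varies by product" idea can be sketched as a single generic order endpoint that validates whichever product-specific payload arrives against a per-product field list. The product names and required fields below are invented for illustration:

```python
# Sketch of a dynamic order API: one entry point, per-product payloads.
# Product names and required fields are invented for illustration.

REQUIRED_FIELDS = {
    "vpn":      {"sites", "bandwidthMbps"},
    "firewall": {"site", "ruleSet"},
}

def place_order(product: str, payload: dict) -> str:
    """Accept an order if its payload carries the fields that product needs."""
    required = REQUIRED_FIELDS.get(product)
    if required is None:
        return "rejected: unknown product"
    missing = required - payload.keys()
    if missing:
        return f"rejected: missing {sorted(missing)}"
    return "accepted"

print(place_order("vpn", {"sites": ["LON", "NYC"], "bandwidthMbps": 50}))  # → accepted
```

A supplier adds a new product by registering its field list, not by publishing a new API, which is the property that lets one API investment serve many buyers and sellers.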

Dynamic APIs will likely be certified against core components. As long as any extensible parts follow the pattern defined by the API, they will be certified. If a set of APIs is extended in the same way by multiple service providers, the extension may be integrated into the core.

APIs are so important that nine of the world’s largest operators have officially adopted TM Forum’s suite of Open APIs for digital service management, committing to adopt TM Forum Open APIs as a foundational component of their IT architectures. More service providers are expected to announce their endorsement of them shortly.

Control Loops and Assurance

Close to half of TM Forum respondents ranked autonomic control loops and service assurance high in their requirements for orchestration. By collecting and analyzing performance data, figuring out how the network can be optimized and then applying policy in an automated way, service providers can achieve zero-touch provisioning and management. Orchestration allows operators to activate services on any vendor’s device in a standardized way.
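The closed loop described above reduces to three steps: collect performance data, analyze it against policy, and emit an action for the orchestrator to apply. A minimal sketch, with thresholds and action names that are assumptions rather than any vendor's implementation:

```python
# Minimal closed-control-loop sketch: collect -> analyze -> act.
# Thresholds and action names are illustrative assumptions.

def analyze(cpu_samples: list, high: float = 0.8, low: float = 0.3) -> str:
    """Apply policy to collected performance data and decide an action."""
    avg = sum(cpu_samples) / len(cpu_samples)
    if avg > high:
        return "scale-out"      # orchestrator adds an instance
    if avg < low:
        return "scale-in"       # orchestrator removes an instance
    return "no-op"              # within policy bounds, nothing to do

# One pass of the loop over freshly collected samples:
print(analyze([0.91, 0.87, 0.95]))  # → scale-out
print(analyze([0.10, 0.15, 0.12]))  # → scale-in
```

Closing the loop means the action feeds back into the network automatically, so the next collection cycle sees the effect of the previous decision.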

Intent-Based Management

Close to half of respondents put intent-based management in their top three architectural requirements for orchestration. Customers access a self-service portal to specify the service they want to use and the desired end state. An abstraction describes what the service is supposed to do and the agreed terms for QoS; the orchestration system then provisions and manages the required service automatically. The goal is to build a Hybrid Network Management Platform that provides for modularity, flexibility and adaptability.
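At its core, intent-based management means the customer states a desired end state and the orchestrator reconciles the network toward it, rather than the customer issuing device-level commands. A minimal sketch, where the intent fields and action strings are assumptions for illustration, not any standard:

```python
# Minimal intent-reconciliation sketch: compare desired state to observed
# state and emit whatever actions are needed to converge. Fields are
# illustrative, not drawn from any real information model.

def reconcile(intent: dict, observed: dict) -> list:
    """Return the actions needed to move the observed state to the intent."""
    actions = []
    for key, desired in intent.items():
        if observed.get(key) != desired:
            actions.append(f"set {key}={desired}")
    return actions

intent = {"latencyClass": "low", "instances": 3}
observed = {"latencyClass": "low", "instances": 2}
print(reconcile(intent, observed))  # → ['set instances=3']
```

Because the customer expresses only the end state, the same intent can be realized differently on different underlying platforms, which is exactly the abstraction the report calls for.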


Orchestrating a Way Forward

Service providers must set a clear migration strategy—both technologically and culturally—in order to transition to end-to-end orchestration and a platform approach. Thirty-eight percent of respondents said setting a clear technology path was a top-three consideration for orchestration, while twenty-four percent ranked setting a clear cultural migration path as important. Orchestration may seem to be mainly a technology challenge, but it won’t happen without learning to think like a software company. That means focusing on services more than functions and learning to allow for failure – something that is in complete opposition to the way network operators have done business since their inception.

Still, a majority of service providers agree they need to become much more software-savvy. Setting a clear cultural migration strategy should be a top priority for all.

NFV World Congress: The Future of Virtualized Networks is Now

by Affirmed

NFV World Congress took place two weeks ago, with the industry’s leading participants converging on Silicon Valley to discuss the many ways virtualized infrastructures are providing transformational benefits to operators around the world. As the company with more live NFV deployments than anyone else in the industry (50+), Affirmed Networks was front and center, participating in several discussions on the current state of NFV.

Having earned a leadership position, representatives from Affirmed Networks shared their views on several topics including the importance of ecosystems and standards, and the misconception that operators need to wait for 5G to begin experiencing the transformational benefits of NFV.

Affirmed Networks was part of two panels, providing perspectives on the importance that ecosystems and interoperability play in ensuring success for Communications Service Providers (CSPs) as they continue to deploy virtualized architectures.

Angela Whiteford, VP Product Management and Marketing, on panel “Revealing how mobile operators can build 5G capabilities on 4G network infrastructure”


Affirmed also provided attendees with a glimpse into what is available today for CSPs facing exponential traffic growth and flat to declining revenues. Through a detailed discussion on the capabilities of current virtualized offerings, Angela Whiteford, Affirmed Networks’ Vice President of Product Management and Marketing, shared how leading CSPs are leveraging key functionality in the areas of automation, real-time analytics, network slicing, and decomposed architectures, outlining the tangible benefits these capabilities can have on overall profitability now.

Throughout the event the widespread consensus and support behind NFV was evident. With CSPs squarely in the “when” not “if” camp, the only remaining question on the table for operators seems to be: why wait when the future is here now?