

Cloud-Native Microservices: Dream A Little Dream

by Ron Parker

If you haven’t heard of the Cloud Native Computing Foundation (CNCF) yet, don’t worry, you will soon. Although they’ve only been around for a few years, the CNCF is already helping to shape the future by rallying developers and engineers around the importance of building cloud-native networks and applications. Key to their vision of the future is a little something known as a microservice. Now, you might not think something with such a small-sounding name could be a big deal, but you’d be mistaken. Cloud-native microservices are a very big deal if you plan on delivering applications in the future.

Microservices are one of the three building blocks that define a cloud-native approach, according to the CNCF. The other two are containers and dynamic orchestration, and each is worth its own blog, which I hope to write in the near future. But I'm focusing on microservices first because I believe they have the potential to be the most impactful to our telecommunications industry.

Why Cloud-Native Microservices are Important

Before I get ahead of myself, let me backtrack by explaining what a microservice is and why it’s important. If you look at software development in the telco industry today, it’s a big and bulky affair. Software consumes a lot of processing and storage, even after virtualization, and evolves slowly over one or two major iterations per year. It’s expensive to create, test and update this software because it has so many unique parts: policy engine, security, database, etc. As a result, new telco applications are few and far between—the opposite of how most agile over-the-top competitors and cloud service providers work today.

Cloud-native microservices are essentially the moving parts that make up traditional software, unbundled and placed in containers for easier management. So instead of a single, massive software application consuming several virtual machines, you might have 30 individual microservices that are constantly updated—weekly or even daily—and run on one or two virtual CPUs for much better scalability and efficiency. More importantly, new applications can be composed from microservices in a matter of days or even hours, and torn down just as quickly without impacting the network.
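To make the scale of a microservice concrete, here is a minimal sketch in Python's standard library. Everything here is hypothetical and illustrative, not a reference design: the service, its session-counting job, and its endpoint are invented, and a production service would add packaging, health checks, and a container image.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical single-purpose microservice: it does exactly one job
# (counting sessions per user) and exposes it over HTTP, so it can be
# deployed, scaled, and updated independently of every other service.
SESSIONS = {}

def handle_request(path):
    """Pure request logic, kept separate from the transport so it can
    be tested (and replaced) on its own."""
    if path.startswith("/sessions/"):
        user = path.rsplit("/", 1)[-1]
        SESSIONS[user] = SESSIONS.get(user, 0) + 1
        return 200, {"user": user, "sessions": SESSIONS[user]}
    return 404, {"error": "not found"}

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        status, body = handle_request(self.path)
        payload = json.dumps(body).encode()
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

if __name__ == "__main__":
    # Run standalone; in practice this would live in a container image.
    HTTPServer(("", 8080), Handler).serve_forever()
```

The point is the scope: one small, replaceable function behind a network interface, rather than one slice of a monolithic application.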

Now Is the Time for Telcos to Adopt Cloud-Native Microservices

If this system sounds familiar, it should—it’s the way that Amazon, Google and other visionary cloud providers have been building their applications for years. Today, most enterprises have adopted the microservices model as well, from mobile healthcare applications to online financial services. For telcos, the time is right to adopt microservices, particularly as they look ahead to the new revenue opportunities presented by 5G and technologies such as network slicing. The ability to slice traffic based on unique requirements (e.g., latency, security, scalability) will be critical as telco carriers partner with their enterprise customers to enable the next generation of 5G services. Network slicing presumes that carriers will also be able to provide tailored services across those slices, something that will require a microservices-based approach if telcos hope to compete with agile OTT/cloud companies for that opportunity.

In a very real sense, cloud-native microservices have the power to transform telcos into innovation factories. For years, telcos have been forced to dream big: if an idea couldn’t justify the huge investment in time and cost required to develop, test and launch it as a service, it was shelved. With microservices, even small dreams can come true, because the investment in time and cost is minimal, and the impact on existing network services is nominal. That fail-fast, succeed-faster model is what allows the Amazons and Googles of the world to take chances without taking risks.

It’s a world that telcos are about to discover for themselves. So go ahead, dream a little dream. It might just have a bigger impact than you ever imagined.


Talking Points Around Standards, Open Source, and the Implementation of End-to-End Orchestration

by Angela Whiteford

In this final blog post about the TM Forum report on Orchestration, we look at the role of standards and open source in end-to-end orchestration, as well as implementation strategies.

Standards and Open Source: The Rise of Three Camps

Among standards bodies and open source groups, TM Forum is relied upon by a large majority of service providers for help with orchestration, with ETSI, MEF, and OASIS also cooperating to develop the Hybrid Network Management Platform.

Almost two-thirds of service providers view open source as either extremely or very important for NFV and SDN deployment. Strong alignment is needed in a digital ecosystem of partners where it is important that everyone understand the requirements in the same way and work together on a common source code.

AT&T, China Mobile and Telefónica are vying for open source leadership. By contributing ECOMP to open source, AT&T is clearly pushing for its platform to become the de facto industry standard for NFV orchestration. The company is planning to contribute the core orchestration code from ECOMP, but not the policy or analytics components, which it considers proprietary. China Mobile, which supports the OPEN-Orchestrator Project (OPEN-O), is unlikely to adopt AT&T's ECOMP, although AT&T has said that the ECOMP code itself is vendor-neutral and the company will consider other integrators. Telefónica has contributed its virtual infrastructure manager and orchestrator to Open Source MANO (OSM), an ETSI-sponsored group. Ultimately, all the players need to realize that change will come faster if everyone works together and agrees on how to federate the disparate approaches.

Strategies for Implementing Orchestration

Service providers can move toward becoming platform providers by taking an enlightened strategy toward adopting orchestration. Key elements include:

Understand what end-to-end orchestration means. Orchestration goes beyond network functions virtualization (NFV). It is about automation. The NFV Orchestrator role specified in ETSI's NFV MANO isn't enough. To manage hybrid networks and give customers the ability to control their own services, end-to-end automation is required, and that includes operational and business support systems (OSS/BSS).

Adopt a platform approach. Platform providers like Airbnb, Amazon, Google, Netflix and Uber have achieved success by providing an interface between customers and sellers. Telecom companies like BT, Orange and Vodafone see orchestration as a strategic step toward becoming platform providers for third parties, building their businesses by curating ecosystems that link end customers or users with producers of goods and/or services. Network operators need a similar model to offer the network platform as a service.

Determine where orchestration has to happen. Orchestration happens everywhere, and systems must communicate with each other and with many other physical and virtual elements to deliver a service request that the customer initiates through the customer portal. This spans the technology layer, which includes physical and virtual functions, the resource layer where functions are modeled as logical resources, the services layer where provisioning, configuration and assurance happen, and the customer layer.

Use common information models, open APIs and intent-based management. A service provider’s master service orchestrator will never have complete visibility into other providers’ networks and operational and business support systems. Service providers will automate service provisioning and management end to end by agreeing to use the same information, data models, and APIs so that orchestrators in different domains can communicate. Intent-based management abstracts the complexity of the network and uses customer intent and policy to manage it. The answer lies less in the orchestrator and more in standardizing the things that are being orchestrated.

Implement closed control loops, policy and analytics. Closing the loop means collecting and analyzing performance data to figure out how the network can be optimized and then applying policy, usually through orchestration, to make the changes in an automated way.

Design in security. Trying to bolt security features on afterwards doesn’t work. Detecting configuration-related vulnerabilities requires an orchestrator that can call on internal or external security functions and apply security policies to users or systems accessing NFV components.

Chart the migration paths. For service providers, success will depend greatly on how well they plan the transition, setting a clear migration strategy both technologically and culturally. This really comes down to learning to think like a software company.

Work toward a common goal in open source groups. Aligning around a single approach would certainly make end-to-end orchestration easier, but short of that ideal, ways must be found to federate the approaches through collaborative work on common information and data models and APIs. Developing the technology and business models needed in the world of 5G and the Internet of Everything can only happen if everyone works together.
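Of the elements above, the closed control loop is perhaps the easiest to picture in code. The Python sketch below is purely illustrative: the metric, thresholds, and actions are invented, and a real orchestrator would drive them from operator policy and analytics rather than hard-coded values.

```python
# Illustrative closed control loop: collect performance data, analyze
# it against policy, and return the action an orchestrator would apply
# automatically. Metric names and thresholds are hypothetical.
POLICY = {"cpu_high": 0.80, "cpu_low": 0.20}

def analyze(samples):
    """Decide an action from a window of observed CPU utilization."""
    cpu = sum(samples) / len(samples)
    if cpu > POLICY["cpu_high"]:
        return "scale_out"   # ask the orchestrator for more instances
    if cpu < POLICY["cpu_low"]:
        return "scale_in"    # reclaim unused capacity
    return "steady"

def control_loop(windows):
    """One pass of the loop: monitor -> analyze -> act (returned here,
    applied via orchestration in a real deployment)."""
    return [analyze(window) for window in windows]
```

Each pass closes the loop without human intervention, which is what makes zero-touch operation possible.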

This is the final blog in our series on the TM Forum report on Orchestration. The report confirms that orchestrating services end-to-end across virtualized and physical infrastructure is indeed a huge challenge—but not an insurmountable one.

Important Steps & Requirements for E2E Network Orchestration

by Angela Whiteford

In this fourth blog post about the TM Forum report on Orchestration, we look at key steps and architectural requirements service providers must consider as they move toward the Operations Center of the Future (OpCF).

Participants in workshops and Catalyst projects (including CIOs and VPs in networking and operations, network and systems architects, IT managers, OSS/BSS directors, and software developers) were asked to rank seven orchestration components in order of importance.

Here are some takeaways from the report.

Standardized Patterns

Close to forty percent of participants ranked common information models and APIs as their number one concern, while two-thirds put standardized patterns in their top three. According to Shahar Steiff, Assistant Vice President, New Technology, PCCW Global, end-to-end service orchestration is a lot like playing with Lego blocks and Meccano parts—it's easy until you try to combine them. Network operators must partner to deliver the services customers are demanding, but some partners don't speak the same language. The joint MEF and TM Forum Network-as-a-Service Catalyst is developing the common language, definitions, information models and APIs needed to help service providers automatically order, provision, manage and assure virtualized services across partners' boundaries.

Using the Catalyst, time-to-service-delivery can be reduced from as long as three months to fewer than ten minutes, with the potential to get that down to milliseconds in future phases. Time-to-market is faster because there is no physical equipment to install. Service providers will have to agree to use the same information and data models along with APIs so that orchestrators in different domains can communicate. This, combined with intent-based management (which abstracts the complexity of the network and uses a customer's intent and policy to manage it), allows service providers to automate provisioning and management, end to end.

While it is unlikely that the industry will coalesce around a single data model, service providers and suppliers will probably adopt a few and map them to one another. TM Forum’s high-level information model provides standard definitions for information that flows between communications service providers and their business partners, and defines a common vocabulary for implementing business processes. This reduces complexity by providing an off-the-shelf model that can be easily and quickly adopted by all parties.
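To illustrate what mapping one model onto another involves, here is a hypothetical Python sketch. The field names are invented for the example and are not drawn from the TM Forum information model itself.

```python
# Illustrative mapping from one partner's native service record to a
# shared information model, so that orchestrators in different domains
# can exchange the same object. All field names are hypothetical.
PARTNER_A_TO_COMMON = {
    "svc_id": "serviceId",
    "bw_mbps": "bandwidthMbps",
    "sla_class": "qosClass",
}

def to_common_model(native_record, field_map=PARTNER_A_TO_COMMON):
    """Translate a native record into the common vocabulary,
    rejecting fields the mapping does not cover."""
    unknown = set(native_record) - set(field_map)
    if unknown:
        raise ValueError(f"unmapped fields: {sorted(unknown)}")
    return {field_map[k]: v for k, v in native_record.items()}
```

With a few such maps, each provider keeps its internal model while every cross-domain exchange uses the shared one.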

The API Wave of the Future

Dynamic APIs are the wave of the future. They can mediate connections between diverse systems, allowing the payload to vary depending on the product or service that's being ordered, procured or managed. In the case of the Catalyst, the Open Digital API acted as a bridge between an orchestration system and the OSS/BSS, allowing suppliers to invest in a single set of APIs and use them to supply multiple products to many buyers. Conversely, buyers could also use a single API investment to integrate with many suppliers, which allows a true marketplace to form.
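One way to picture a dynamic API is as a single entry point whose payload schema varies with the product being ordered. The Python sketch below is purely illustrative: the products, fields, and validation rules are invented, and this is not the Open Digital API.

```python
# Sketch of a dynamic order API: one entry point, with the payload
# validated against a per-product schema. Product names and required
# fields are hypothetical examples.
SCHEMAS = {
    "ethernet_line": {"endpointA", "endpointZ", "bandwidthMbps"},
    "vpn": {"sites", "qosClass"},
}

def place_order(product, payload):
    """Accept an order only if the payload satisfies the schema
    registered for that product."""
    required = SCHEMAS.get(product)
    if required is None:
        return {"status": "rejected", "reason": "unknown product"}
    missing = required - set(payload)
    if missing:
        return {"status": "rejected", "reason": f"missing {sorted(missing)}"}
    return {"status": "accepted", "product": product}
```

Adding a product means registering a schema, not building a new API, which is what lets one integration serve many buyers and suppliers.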

Dynamic APIs will likely be certified against core components. As long as any extensible parts follow the pattern defined by the API, they will be certified. If a set of APIs is extended in the same way by multiple service providers, the extension may be integrated into the core.

APIs are so important that nine of the world’s largest operators have officially adopted TM Forum’s suite of Open APIs for digital service management, committing to adopt TM Forum Open APIs as a foundational component of their IT architectures. More service providers are expected to announce their endorsement of them shortly.

Control Loops and Assurance

Close to half of TM Forum respondents ranked autonomic control loops and service assurance high in their requirements for orchestration. By collecting and analyzing performance data, figuring out how the network can be optimized and then applying policy in an automated way, service providers can achieve zero-touch provisioning and management. Orchestration allows operators to activate services on any vendor's device in a standardized way.

Intent-Based Management

Close to half of respondents put intent-based management in their top three architectural requirements for orchestration. Customers access a self-service portal to specify the service they want to use and the desired end state. An abstraction describes what the service is supposed to do and the agreed terms for QoS, then the orchestration system provisions and manages the required service automatically. The goal is to build a Hybrid Network Management Platform that provides for modularity, flexibility and adaptability.
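In code, intent-based management boils down to translating a declared end state into the provisioning actions needed to reach it. Here is a minimal, hypothetical Python sketch; the intent fields and actions are invented for illustration.

```python
# Illustrative intent translation: the customer declares the desired
# end state, and the orchestrator derives the actions needed to move
# the network there. Fields and actions are hypothetical.
def plan_from_intent(intent, current_state):
    """Return the actions that close the gap between what the customer
    asked for and what the network currently provides."""
    actions = []
    if intent["service"] not in current_state.get("services", []):
        actions.append(("provision", intent["service"]))
    if current_state.get("latency_ms", float("inf")) > intent["max_latency_ms"]:
        actions.append(("move_to_edge", intent["service"]))
    return actions
```

The customer never specifies the steps, only the outcome; the abstraction hides the network's complexity behind the intent.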



Orchestrating a Way Forward

Service providers must set a clear migration strategy—both technologically and culturally—in order to transition to end-to-end orchestration and a platform approach. Thirty-eight percent of respondents said setting a clear technology path was a top-three consideration for orchestration, while twenty-four percent ranked setting a clear cultural migration path as important. Orchestration may seem to be mainly a technology challenge, but it won’t happen without learning to think like a software company. That means focusing on services more than functions and learning to allow for failure – something that is in complete opposition to the way network operators have done business since their inception.

Still, a majority of service providers agree they need to become much more software-savvy. Setting a clear cultural migration strategy should be a top priority for all.

NFV World Congress: The Future of Virtualized Networks is Now

by Angela Whiteford

NFV World Congress took place two weeks ago, with the industry's leading participants converging on Silicon Valley to discuss the many ways virtualized infrastructures are providing transformational benefits to operators around the world. As the company with more live NFV deployments than anyone in the industry (50+), Affirmed Networks was front and center, participating in several discussions on the current state of NFV.

Having earned a leadership position, representatives from Affirmed Networks shared their views on several topics including the importance of ecosystems and standards, and the misconception that operators need to wait for 5G to begin experiencing the transformational benefits of NFV.

Affirmed Networks was part of two panels, providing perspectives on the importance that ecosystems and interoperability play in ensuring success for Communications Service Providers (CSPs) as they continue to deploy virtualized architectures.

Angela Whiteford, VP Product Management and Marketing, on panel “Revealing how mobile operators can build 5G capabilities on 4G network infrastructure”


Affirmed also provided attendees with a glimpse into what is available today for CSPs facing exponential traffic growth and flat to declining revenues. Through a detailed discussion on the capabilities of current virtualized offerings, Angela Whiteford, Affirmed Networks’ Vice President of Product Management and Marketing, shared how leading CSPs are leveraging key functionality in the areas of automation, real-time analytics, network slicing, and decomposed architectures, outlining the tangible benefits these capabilities can have on overall profitability now.

Throughout the event the widespread consensus and support behind NFV was evident. With CSPs squarely in the “when” not “if” camp, the only remaining question on the table for operators seems to be: why wait when the future is here now?

Affirmed Networks Named Finalist in Mobile World Congress “Glomo Awards” for Second Consecutive Year

by Angela Whiteford

We are pleased to announce that Affirmed Networks has, for the second consecutive year, been named a finalist as part of the esteemed “Glomo Awards” program that takes place as part of Mobile World Congress (MWC) annually.

Filled with keynotes, announcements, demonstrations and infinite cab lines, MWC represents an action-packed week for all involved. Each year, the Glomo Awards represent an exciting aspect of the show, with judges selecting only a handful of finalists from thousands of compelling submissions across the program's award categories.

In 2016, Affirmed Networks was named a finalist for the first time, and while we did not come home with the “hardware”, the credibility and recognition we received through being part of the program was rewarding in and of itself.

This year, we are fortunate to have been named a finalist in the category of “Best Mobile Technology.” Specifically, within this category, Affirmed is a finalist with five other deserving companies in the “Best Technology Enabler” area. Once again, we are thrilled to be part of the conference program, and hope that finalists from across all categories will have the type of positive experience we continue to have as part of this event and program as a whole.

We would like to thank the GSMA for their overall organization of MWC, and the judges who have taken the time to evaluate thousands of applications. None of this would be possible, however, without the support of our customers, and the tremendously dedicated team we have as part of the Affirmed Networks family.

Collectively, we have our fingers crossed that we win the top award this year (and also hope it fits in the overhead bin on the flight home).

Wish us luck. We’ll keep you posted on all of our efforts and activities at the event.