
The Age of the Subscription Economy

by Sean O’Donoghue

As enterprises move from physical products to digital services, they are redefining their business relationship with the end customer, creating what we now know as the subscription model. Subscription models are one of the ways 5G will benefit consumers, because they provide a natural vehicle for delivering meaningful 5G consumer and enterprise services. Many of today’s most popular digital services, including video streaming, music streaming, transportation, newspapers, and magazines, already use a subscription model, and the trend will only accelerate as 5G streaming grows in the years ahead.

The rapid adoption of subscription-based services is not something new. Companies like Spotify introduced the idea of subscriber-based music streaming to consumers over a decade ago; today, Spotify boasts more than 270 million active users. In the enterprise domain, companies such as Oracle have also successfully made the transition from on-premises perpetual software to a cloud-based SaaS business model for infrastructure and software applications.

Meanwhile, in the communications industry, service providers are looking to enhance their digital offerings and business models to become the digital service providers (DSPs) of tomorrow. Streaming services are a big part of that opportunity, and DSPs are embracing subscription-based models to procure, deliver, and monetize these digital services.

 

Subscription Models and the Mobile Network

This shift to subscription-based models is reflected in Affirmed Networks’ Mobile Network as a Service and Affirmed Cloud Edge offerings, which are designed to help DSPs rapidly deploy their mobile network infrastructure, accelerate service innovation and reduce costs in an open public cloud infrastructure.

Mobile Network as a Service Capabilities

With Mobile Network as a Service, DSPs can:

  • Rapidly and cost-effectively test new markets and services by leveraging public cloud infrastructure and economies of scale;
  • Instantly instantiate new services and functions such as deep packet inspection, media optimization, and network address translation on a single platform;
  • Focus on growing the digital services business opportunity while a partner delivers the underlying platform;
  • Eliminate the upfront CapEx required for hardware and data centers while controlling OpEx costs thereafter;
  • Provide “just in time” network scaling for the capacity that DSPs actually need, rather than “just in case” capacity planning that often requires expensive over-provisioning of network resources;
  • Implement local breakout services to reduce backhauling costs and improve customer experiences.

 

Procuring, delivering, and monetizing mobile network infrastructure has evolved from hardware-based solutions to perpetual software-based models and, now, to SaaS-based models. DSPs are slowly changing procurement procedures and other internal practices as they move to SaaS-based solutions delivered by innovative partners. DSPs understand the value creation that SaaS enables in their business by rapidly delivering a tailored solution for customers, generating recurring revenues, and reducing customer churn. For the end customer, this all boils down to convenience.

 

A Win-Win for Consumers & DSPs

A digital service that is personalized to the needs of the individual customer in terms of commitment, payments, and accessibility is paramount. The evolution of traditional models to SaaS-based models is necessary to meet changing customer behaviors and foster innovative ways to generate new revenue streams. The technology and the commercial models are now available to DSPs to help them rapidly prototype, innovate and deliver meaningful 5G consumer and enterprise services. At the same time, end consumers get more personalized service. It’s a win-win scenario that everyone can subscribe to.

The Democratization of IT and the Network: Are You Ready?

by Sean O’Donoghue

Many years ago, I worked at a network equipment provider, where I shared an office with the mobile core team. And every morning, as I walked through the reception area, I would pass a pair of towering, refrigerator-sized CGSN and SGSN nodes. Each node could deliver Wireless Application Protocol (WAP) service to about 50,000 subscribers. At the same time, companies like VMware and Sun Microsystems were just starting to take the concept of IT virtualization mainstream.

How times have changed. And it’s because of the Democratization of IT.

Today, IT applications are being rewritten to adopt cloud-native principles and deployed in hybrid cloud environments that are maintained with a DevOps model. Yet while IT applications for functional domains such as Online Channels, Customer Relationship Management, Sales Force Automation, and Human Capital Management have already made the transition to a cloud-native design, network applications have only recently begun to embark on virtualization and the first wave of digitalization.

The Democratization of Networks

We hear a lot of debate in the industry around the impact of 5G, but I believe another, equally important topic is being ignored: the democratization of the network. Never before has the network been more accessible. The democratization of technology and the network is taking place, driven by open technologies and evolving commercial models: open, web-based interfaces, cloud-native architectures, and public cloud deployments.

Few people would argue against the necessity of cloud-native network functions to deliver service and operational agility, but what lessons can the network community learn from the evolution of on-premises software to virtualized software and, finally, to truly cloud-native software? Will the traditional barriers between networks and IT collapse until all applications are deployed on horizontal platforms? I believe the answer is a definitive Yes.

Today, most service providers have separate IT and network teams that reflect the historical separation of the two technologies. This is rapidly changing, however. As service providers come to realize that the network and IT functions can be developed, deployed and managed from a common open-source infrastructure, the heated discussions between IT and Networking over topics such as converged charging and prepaid platforms are becoming a distant memory.

There are still points for discussion, of course; for example, the need for an infrastructure that supports both CPU-intensive and I/O-intensive workloads. But there are many similarities and synergies that can be derived by jointly designing network and IT applications in areas such as security, reliability, and resiliency. And, really, shouldn’t we be working together to create a common set of requirements, platforms, and processes to develop, manage and maintain both IT and network applications?

The Solution for Service Providers

The solution, I believe, lies in the creation of a common platform and a common team to architect, deploy and operate the core applications of the future. Labeling an application as an “IT application” or a “network application” is less important than creating an underlying platform that has the built-in flexibility and adaptability to serve the future needs of both IT and networking. Service providers cannot count on traditional vendors to deliver this future. These vendors have a legacy hardware business to protect and are saddled with legacy software that cannot fulfill the basic requirements of cloud-native.

What service providers need is choice. For example, they should be able to choose whether they deploy network applications on bare metal, in a private cloud, in a public cloud or in a hybrid cloud running on a common platform. Affirmed is committed to delivering more choices to service providers through the industry’s only truly cloud-native mobile core solution built on a common, open-source platform. It’s much more than a platform for virtualization; it’s a platform for innovation.

Think about it: How will service providers operate networks and deliver network assurance as the cloudification of the network goes mainstream? Wouldn’t it be more cost-efficient to have a single observability platform (e.g., Grafana) for both IT and network functions? From a service agility perspective, IT has been using microservices for years to rapidly deliver new software functionality and capabilities. Now, for the first time, IT and network functions can take advantage of the same technologies while using the same tools, processes, and people – thanks to democratization.
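
To make the single-observability-platform idea concrete, here is a minimal Python sketch (using the prometheus_client library) of how an IT metric and a network-function metric could be exported from the same endpoint and scraped by one Prometheus/Grafana stack. The metric names, labels, and simulated values are purely illustrative assumptions, not part of any Affirmed product.

```python
# Illustrative sketch only: expose an "IT" metric and a "network" metric from
# one Prometheus endpoint, so a single Grafana instance can chart both.
# Metric names, label values, and the simulated readings are hypothetical.
import random
import time

from prometheus_client import Gauge, start_http_server

# An IT-style metric (e.g., CRM API latency) and a network-style metric
# (e.g., user-plane throughput) registered in the same process and registry.
crm_api_latency = Gauge(
    "crm_api_latency_seconds", "Simulated CRM API latency", ["service"]
)
upf_throughput = Gauge(
    "upf_throughput_mbps", "Simulated user-plane throughput", ["site"]
)

if __name__ == "__main__":
    start_http_server(9100)  # Prometheus scrapes http://host:9100/metrics
    while True:
        crm_api_latency.labels(service="online-channel").set(random.uniform(0.05, 0.3))
        upf_throughput.labels(site="edge-1").set(random.uniform(800, 1200))
        time.sleep(5)
```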

Service providers can now leverage a common platform for all their enterprise applications, whether they’re IT or network applications. This has the benefit of improving agility and operations while reducing costs. Of course, to do this, the platform has to be flexible enough to support the various workload configurations, evolve as the open-source tools evolve and support multiple use cases and deployment models.

It’s the promise of new 5G use cases, operational models and cost efficiencies that will drive service providers to review their current platform choices and look for a better solution. On the other side of that solution are a democracy of IT and network teams working together on a shared goal of a better future. Affirmed Networks is ready for that future right now. Are you?

The End of the ISV Era

by Ron Parker

It’s no secret that cloud services are on the rise. Gartner estimates that businesses will spend $266.4 billion on cloud services this year, growing to $354.6 billion by 2022. Why is cloud consumption rising so quickly? Because free-market economies hate inefficiencies, and the cloud is all about efficiency. But that’s not to suggest that cloud adoption didn’t create ripples in the industry, as we see when we look at its history and the impact on independent software vendors (ISVs).

 

History of Cloud Services

In the early days of cloud, enterprises were primarily attracted to the cloud for data center outsourcing: servers, switches, storage, infrastructure-oriented software, and the expertise to manage it all. The premise was that the cloud provider could offer better economies of scale by hosting multiple customer data centers and streamlining their own operations through in-house expertise, particularly around automation. In exchange for a monthly fee, enterprises could eliminate the hardware, software and IT resources associated with maintaining their own data center environment.

Infrastructure Abstraction

Over time, enterprises (and their software suppliers) were able to focus more on the applications and less on the integration points of their legacy infrastructure—the value of infrastructure abstraction was profitably exploited. This soon led to enterprises embracing managed Software as a Service (SaaS) offerings for the supporting systems of their mission-critical applications.

Examples of this include observability-oriented systems (e.g., ElasticSearch, Logstash, Kibana, Prometheus, Grafana) and database systems (e.g., MongoDB, Cassandra). With the cloud provider now offering these systems as managed services, the enterprise no longer needed to worry about deploying and supporting these systems; they could just order them through the cloud provider’s portal.

As all this lower-layer abstraction was happening, however, the remaining applications and the business logic they contained grew more complex—so complex, in fact, that the traditional model of licensing application software to another organization for deployment and operation began to disappear. Modern software is consumed in one of two ways and operated in only one way. Operationally speaking, the organization that builds the software operates it. Enterprises consume the software they write directly and consume anything else as a service via APIs over the Internet. It should be clear that the API service provider is indeed operating the software that it wrote.

Businesses Defined by Software

While this change was happening, an even more important transformation was taking place in the software industry. A growing number of businesses became defined by software, which created a greater need for improved agility. It was no longer acceptable to deliver only two or four upgrades per year; instead, businesses needed tens, hundreds and even thousands of software updates per day.

CI/CD (Continuous Integration/Continuous Deployment)

In 2010, responding to this need, Jez Humble and David Farley devised an extension to the existing concept of continuous integration, calling it Continuous Integration/Continuous Deployment (CI/CD). CI/CD proposed combining the previously separate functions of development and operations into a single function, DevOps. The DevOps team would be responsible for feature development, test automation and production operations. CI/CD maintained that only by breaking down internal barriers could an enterprise reach the point of executing 10 or more updates per day.

There was only one problem with CI/CD: existing software architectures were poorly suited to frequent releases because virtually all software was released monolithically. The software code may have been modularized, but the release needed to include all the functionality, fully tested. How fast could enterprises update a complex, monolithic application?

Microservices

As enterprises were struggling to answer this question, the idea of microservices—which had been floating around since 2007—began to take hold in architectural circles in 2011. Microservices were built on the idea that, by breaking larger applications into bite-size pieces that were developed and released independently, application development teams could release tens, hundreds and even thousands of software updates per day using a fully automated CI/CD pipeline.

This meant that no human intervention would be required between the time a developer committed the code and the time that code ran in a production environment. Microservices—particularly stateless microservices—bring their own complexities, however: API versioning, database schema management, observability, and more. Fortunately, in CI/CD’s DevOps team model, all the necessary expertise is contained in a single group.

 

The Impact on ISVs

So how does all this impact ISVs? Remember, these are the entities that produce applications and license them for use by others. Whether the licensing is annual or perpetual, the main issue is that the purchaser is ultimately responsible for the deployment and operation of that software. ISVs often supplement their licensing with extensive professional services and training as a means to achieve the requisite knowledge transfer. But that knowledge transfer is never complete, and the ultimate experts on the vendor’s software remain in the vendor’s organization.

What’s the natural consequence of this? The customer consumes hosted or fully managed services. It is a change in the way telcos have done things in the past, and change is never easy, but the efficiency and agility benefits of moving to a modern, cloud-native model can have a profoundly positive impact on telcos going forward.

If Data is the New Oil, Why Have Your Mobile Network Data Analytics Efforts Stalled?

by Sean O’Donoghue

It’s been said that data is the new oil. If that’s true, communications service providers (CSPs) have struck it rich. Their networks collect a vast amount of data on customers, all of which can be used to deliver better customer experiences and new, revenue-generating services.  Mobile network data analytics is the opportunity that CSPs have waited for, as they look to transform their business model from communications providers to digital service providers.

In order to deliver better digital engagement and personalized experiences, CSPs need more than real-time accurate data. They need robust mobile network analytics to gain valuable customer insights, identify operational efficiencies, predict future demand and create new services that customers want. There’s just one small problem with all that: Very few CSPs are actually analyzing their network data to do any of those things.

Shortcomings of Current Approaches to Network Analytics

From the beginning, mobile networks were designed as tightly coupled systems that featured hardware-based probes to collect data and deliver insights. With the move to NFV-based networks, the physical probe paradigm became obsolete. Physical probes didn’t scale well, and they weren’t designed to monitor virtual infrastructures, which are dynamic in nature.

To address these shortcomings, passive virtual probes were introduced. These were implemented as separate network functions but, as network traffic loads increased, virtual probes began to take a heavy toll on network performance. A virtual packet gateway, for example, might spend more than half its resources just performing data copy operations for the passive virtual probe.

As if cutting performance in half weren’t bad enough, CSPs now had to double the amount of hardware just to get the same performance from the packet gateway and other functions. Faced with the added complexity and cost of virtual probes, many CSPs opted to either abandon mobile network analytics initiatives altogether or limit them to very specific areas such as service assurance. And so the opportunity to drive new revenue streams fueled by network data analytics was lost.

A New Era of Integrated Virtual Probes and Analytics Arrives

Affirmed Networks did more than radically change how mobile networks were designed and delivered; we also pioneered the first integrated virtual probe and analytics solution. With Affirmed’s virtual probe solution, vProbe, CSPs can get real-time mobile network analytics and intelligence in a highly scalable manner without the performance degradation that results with physical or traditional virtual probes.

How did we create a network probe solution that could scale simply across global networks and still cut costs by more than half? By following these five guiding design principles:

1. Integrate the probing function with the network function.

Most virtual probes add packet latency, complexity, and cost – and drain compute resources – because they add an extra step in the network as data packets move from the network function to the probe function. But the best source of network data is the actual network function itself, so we integrated vProbe with it. Now, as the network function scales, the probing function scales with it, cutting probing costs in half.

2. Simplify with single-touch packet handling.

You can think of a data packet as information sealed in an envelope, complete with a destination address. Most systems open the packet each time they need to perform a function such as content inspection or optimization. Affirmed vProbe opens the packet once to perform everything – heuristics, video optimization, content filtering, CG-NAT forwarding, etc. – then closes the packet and sends it on its way. Single touch is simply more efficient.
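
As a toy illustration of the single-touch idea, the Python sketch below parses a packet once, lets each function annotate the same parsed view, and re-serializes it once at the end. The handler names and the fake parse step are hypothetical stand-ins; a real user-plane pipeline operates on raw packets in the fast path.

```python
# Toy illustration of "single-touch" packet handling: the packet is parsed
# (the "envelope" is opened) once, every function inspects the same parsed
# view, and the packet is re-serialized once at the end. Function names and
# the simplified parse/serialize steps are purely illustrative.
from dataclasses import dataclass, field


@dataclass
class ParsedPacket:
    src: str
    dst: str
    payload: bytes
    annotations: dict = field(default_factory=dict)


def heuristics(pkt):       # e.g., classify the flow
    pkt.annotations["app"] = "video" if b"video" in pkt.payload else "other"

def content_filter(pkt):   # e.g., mark packets that should be blocked
    pkt.annotations["allowed"] = b"blocked-host" not in pkt.payload

def nat_forwarding(pkt):   # e.g., rewrite the source address (CG-NAT style)
    pkt.src = "203.0.113.10"


def single_touch_pipeline(raw: bytes) -> bytes:
    pkt = ParsedPacket(src="10.0.0.5", dst="198.51.100.7", payload=raw)  # open once
    for fn in (heuristics, content_filter, nat_forwarding):              # each function touches the same view
        fn(pkt)
    return pkt.payload  # close once and send the packet on its way


if __name__ == "__main__":
    print(single_touch_pipeline(b"GET /video/clip.mp4"))
```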

3. Create finely grained, intelligent event data records.

In order to better harness, mine and gain insights from the information collected by a probe, data records must be delivered in an open, consistent and granular fashion. This applies to both control plane and user plane information, on a per-flow and individual customer level. Finely grained mobile network data leads to more precision.
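
To illustrate what a finely grained record can look like in practice, here is a hypothetical per-flow event data record sketched as a Python dataclass and emitted as JSON. The field names are illustrative assumptions only and do not reflect Affirmed’s actual record format.

```python
# Hypothetical shape of a fine-grained, per-flow event data record combining
# control-plane and user-plane context; field names are illustrative, not
# Affirmed's actual record format.
import json
from dataclasses import dataclass, asdict


@dataclass
class FlowEventRecord:
    subscriber_id: str      # per-customer granularity
    flow_id: str            # per-flow granularity
    event_type: str         # e.g., "flow_start", "flow_end"
    cell_id: str            # control-plane context
    apn: str
    bytes_up: int           # user-plane counters
    bytes_down: int
    mean_latency_ms: float
    timestamp_ms: int


record = FlowEventRecord(
    subscriber_id="IMSI-001010123456789",
    flow_id="flow-42",
    event_type="flow_end",
    cell_id="cell-7731",
    apn="internet",
    bytes_up=18_234,
    bytes_down=1_204_511,
    mean_latency_ms=23.4,
    timestamp_ms=1_700_000_000_000,
)

# Emitting records in an open, consistent format (JSON here for readability)
# is what lets downstream analytics tools consume them without custom parsers.
print(json.dumps(asdict(record), indent=2))
```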

4. Enable real-time analytics.

As customers access content, watch videos, and make calls, they’re sharing invaluable, real-time information about their behavior and interests, as well as about your network – information that can be used to capitalize on revenue-generating opportunities, identify network performance bottlenecks, ferret out fraud and more. The reality is that opportunities and issues happen in real-time, and you either have the information you need to drive intelligent decision-making, or you don’t.

5. Embrace open standards.

Architectural openness is vital for modern software platforms. Affirmed vProbe provides open access to session information, using Google’s open Protocol Buffers format to create data records. These can easily be integrated with a variety of third-party network analytics tools to deliver real-time insight into network operations, network security, network planning, and marketing activities.

 

Digital leaders leverage data to deliver engaging customer experiences, which is why they’re the world’s most valued companies. Legacy infrastructure and approaches, meanwhile, have stunted CSPs’ ability to capture, see and act on the wealth of information that flows through their networks. To take advantage of the next generation of business opportunities such as IoT, 5G services and AI-driven customer engagement, real-time mobile network analytics are an absolute must. With Affirmed vProbe, CSPs no longer have an excuse not to dive deeply into data analytics.

In a world where everyone is competing to deliver digital services, CSPs need to leverage their data for a competitive advantage. Now, more than ever, it’s oil or nothing.

10 Tips for a Successful NFV Deployment

by Affirmed

Over the past eight years, Affirmed Networks has helped leading service providers successfully transition to NFV-based architectures and realize exceptional returns. Along the way, we’ve learned some valuable NFV deployment lessons about how providers can avoid underwhelming NFV results and realize the technology’s full transformative benefits.

 

Some telecommunications network equipment vendors think that Network Functions Virtualization (NFV) is a byproduct of 5G and that the one shouldn’t arrive before the other. Reality says otherwise; many communications service providers are deriving value from NFV initiatives right now, primarily in the form of CAPEX/OPEX savings and network agility. Yet many service providers, in our experience, still tap into only 30 to 40 percent of NFV’s true potential.

So what typically holds providers back from realizing NFV’s full potential? There is no single reason; rather, it’s usually a combination of missed opportunities and misunderstandings about NFV’s architectural requirements.


 

Our Tips for NFV Deployment Success

To help CSPs around the world ensure success as they prepare for 5G, we have identified 10 tips for overcoming common challenges and achieving a successful NFV deployment in 2020.

  1. Not all hardware is created equal
  2. The packet forwarding architecture and hypervisor need attention too
  3. Don’t oversubscribe the application
  4. NFV isn’t a simple plug-and-play solution
  5. Redundancy needs to be built into the application and not just the NFVI architecture
  6. Telecom applications require built-in load balancing
  7. VMs need to scale independently
  8. Ownership is important
  9. One EMS is better than two (or three)
  10. Take the time to learn from the leaders

 

#1 Not all hardware is created equal

There is a belief that you can run virtualized telecom applications on any vendor’s server – however, this is only a half-truth. There is one hardware dependency that always needs to be considered: the hardware must have a network interface card (NIC) that supports the Data Plane Development Kit (DPDK) in order to function properly.

In our experience, we’ve found it’s often better to bundle the virtual network function (VNF) with hardware providers that support this NIC requirement rather than deploy the VNF in a hardware-agnostic environment.
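
As a rough illustration of how one might sanity-check this on a Linux host, the Python sketch below walks sysfs and reports which PCI network devices are bound to drivers commonly used with DPDK. The driver list is a common but incomplete set, so treat the output as a hint and consult the NIC vendor’s DPDK support matrix for the authoritative answer.

```python
# Rough sketch: report which PCI network devices are bound to drivers commonly
# used with DPDK (vfio-pci, uio_pci_generic, igb_uio). This is a coarse,
# illustrative check only; it does not replace the vendor's support matrix.
import os

DPDK_DRIVERS = {"vfio-pci", "uio_pci_generic", "igb_uio"}
PCI_ROOT = "/sys/bus/pci/devices"

def bound_driver(dev_path: str) -> str:
    link = os.path.join(dev_path, "driver")
    return os.path.basename(os.readlink(link)) if os.path.islink(link) else "none"

def is_network_device(dev_path: str) -> bool:
    # PCI class 0x02xxxx means "network controller"
    try:
        with open(os.path.join(dev_path, "class")) as f:
            return f.read().strip().startswith("0x02")
    except OSError:
        return False

if __name__ == "__main__":
    for dev in sorted(os.listdir(PCI_ROOT)):
        path = os.path.join(PCI_ROOT, dev)
        if is_network_device(path):
            drv = bound_driver(path)
            tag = "DPDK-bound" if drv in DPDK_DRIVERS else "kernel driver"
            print(f"{dev}: driver={drv} ({tag})")
```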

 

#2 The packet forwarding architecture and hypervisor need attention too

While choosing the appropriate hardware can aid in the performance of your virtualized network, the packet forwarding architecture requires attention as well. The main function of the evolved packet core (EPC) is to move a large number of packets through the data plane. This means you need very high performance in the data plane. 

Typically, packets travel through the vSwitch function within the hypervisor, which queues them for the virtual machines (VMs). The vSwitch function uses a great deal of computing power, which limits the performance that VMs can achieve. Single-root input/output virtualization (SR-IOV) technology gets around this limitation: it allows packets to bypass the hypervisor layer and travel directly from the NIC’s PCIe interface on the server to the VMs, giving the VMs full use of their CPU power and significantly increasing performance.

While SR-IOV is not a requirement for an NFV deployment, its role and impact are sometimes misunderstood by service providers. If a provider requires very high throughput, then SR-IOV is necessary. Furthermore, applications are very sensitive to how the hypervisor is configured and the specific settings it uses. In order to reach maximum performance, service providers must also tune the hypervisor to meet the specific requirements of their application (e.g., tuning how the hypervisor schedules the CPUs, CPU pinning, etc.).
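
As one concrete illustration of hypervisor-level tuning, and assuming an OpenStack-based NFVI, CPU pinning and huge pages are typically requested through Nova flavor extra specs. The Python sketch below simply assembles the equivalent openstack CLI commands; the flavor name and sizing are hypothetical examples, not a recommendation.

```python
# Illustrative only: build the OpenStack CLI commands that would create a
# data-plane flavor with dedicated (pinned) CPUs and huge pages, two of the
# hypervisor-level tunings discussed above. Flavor name and sizing are
# hypothetical examples.
import shlex

flavor = {
    "name": "vepc-dataplane",
    "vcpus": 8,
    "ram_mb": 16384,
    "disk_gb": 40,
    "extra_specs": {
        "hw:cpu_policy": "dedicated",   # pin vCPUs to host CPUs
        "hw:mem_page_size": "1GB",      # back guest RAM with huge pages
    },
}

def flavor_commands(spec: dict) -> list:
    cmds = [
        "openstack flavor create "
        f"--vcpus {spec['vcpus']} --ram {spec['ram_mb']} --disk {spec['disk_gb']} "
        f"{shlex.quote(spec['name'])}"
    ]
    for key, value in spec["extra_specs"].items():
        cmds.append(
            f"openstack flavor set --property {shlex.quote(f'{key}={value}')} "
            f"{shlex.quote(spec['name'])}"
        )
    return cmds

if __name__ == "__main__":
    for cmd in flavor_commands(flavor):
        print(cmd)
```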

 

#3 Don’t oversubscribe the application

Another important NFV deployment tip: never oversubscribe a virtual application or its CPUs. Even though the technology allows for oversubscription, doing so degrades the application’s performance and causes problems down the road.

 

#4 NFV isn’t a plug-and-play solution

While virtualization is often marketed as plug and play, the reality is that it requires some tuning in the ecosystem for telecom applications to run at maximum performance. For example, in one NFV deployment, a customer experienced a denial-of-service attack that featured a lot of “burstiness” in the traffic. 

The DPDK driver was indiscriminately dropping packets and causing packet loss because it didn’t have any concept of quality of service (QoS). This required modification of the driver to avoid latency and packet loss. While this may seem like a minor detail, it can have a major impact on performance.

 

#5 Redundancy needs to be built into the application and not just the NFVI architecture

In the enterprise world, redundancy is a relatively simple matter of spinning up a new VM when one VM fails. This works well for stateless, transaction-based applications, but telecom applications are stateful. When you lose the state of the VM, you lose the service. Also, when a VM fails, the time it requires to spin up a new VM is far too long for telecommunications applications and extends the problem of service disruption – a challenge for many service providers. 

In order to provide stateful redundancy in a telecom environment, operators cannot rely only on NFVI redundancy; statefulness needs to be built directly into the virtual application itself or maintained in an externalized database. That’s the approach we took when building our virtualized EPC solution, and it is a very important lesson to remember when talking about NFV.
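
Here is a toy sketch of the externalized-state option: session state is checkpointed to a store that lives outside the VM (SQLite is used below purely as a stand-in for a replicated, network-accessible database), so a replacement instance can reload existing sessions instead of dropping them. This illustrates the concept only; it is not Affirmed’s implementation.

```python
# Toy sketch of externalized session state: each update to a session is
# checkpointed to a store that lives outside the VM (sqlite here purely as a
# stand-in for a replicated, network-accessible database), so a replacement
# instance can reload sessions instead of dropping them.
import sqlite3

STORE = "session_state.db"  # in a real deployment: an external, replicated store

def open_store():
    db = sqlite3.connect(STORE)
    db.execute(
        "CREATE TABLE IF NOT EXISTS sessions (session_id TEXT PRIMARY KEY, state TEXT)"
    )
    return db

def checkpoint(db, session_id: str, state: str):
    db.execute("INSERT OR REPLACE INTO sessions VALUES (?, ?)", (session_id, state))
    db.commit()

def recover_all(db) -> dict:
    return dict(db.execute("SELECT session_id, state FROM sessions"))

if __name__ == "__main__":
    # "Old" instance handles a session and checkpoints its state...
    db = open_store()
    checkpoint(db, "imsi-001010123456789", "ACTIVE:bearer=5,apn=internet")
    db.close()

    # ...the VM fails; a freshly spun-up instance reloads the same sessions.
    db = open_store()
    print("recovered sessions:", recover_all(db))
    db.close()
```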

 

#6 Telecom applications require built-in load balancing

One of the main benefits of a virtual environment is the ability to scale up or scale down your processing power as workload demands change. When decommissioning a VM, however, you lose the state of that VM. In an enterprise environment featuring stateless, transaction-based applications, this is not an issue—but it is an issue in a telecom environment where stateful applications are the norm. 

Telecom applications that support dynamic scaling need load balancing; this way, when new resources are available, the application can load-balance across the new resources to prevent dropping service during a call/session. We believe load balancing should be built into the application, as the application knows better how to use the resources than an external load balancer.
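
The sketch below shows one simplified way an application could assign sessions to worker VMs so that scaling out moves only a small share of sessions: a consistent-hash ring. It is a teaching example under assumed session and VM names, not Affirmed’s load-balancing algorithm.

```python
# Simplified illustration of application-built-in load balancing: sessions are
# mapped to worker VMs with a consistent-hash ring, so scaling out moves only a
# small share of sessions rather than reshuffling (and dropping) everything.
import bisect
import hashlib

def h(key: str) -> int:
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class Ring:
    def __init__(self, workers, vnodes=64):
        # Each worker gets several virtual points on the ring for even spread.
        self.points = sorted((h(f"{w}#{i}"), w) for w in workers for i in range(vnodes))
        self.keys = [p for p, _ in self.points]

    def owner(self, session_id: str) -> str:
        idx = bisect.bisect(self.keys, h(session_id)) % len(self.points)
        return self.points[idx][1]

if __name__ == "__main__":
    sessions = [f"session-{i}" for i in range(10_000)]
    before = Ring(["vm-1", "vm-2", "vm-3"])
    after = Ring(["vm-1", "vm-2", "vm-3", "vm-4"])  # scale out by one VM
    moved = sum(before.owner(s) != after.owner(s) for s in sessions)
    print(f"sessions moved after scale-out: {moved}/{len(sessions)}")
```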

 

#7 VMs need to scale independently

NFV vendors need to be thinking about scalability before they build their solutions, not after. Specifically, vendors need to ensure that their VNFs can scale independently across different dimensions. In a telecom application, the data plane, management plane and control (i.e., signaling) plane each need to be scaled independently to avoid paying for stranded capacity. 

In a blade-based architecture, the signaling, data, and management capacity are added in fixed ratios; as more signaling capacity is needed, more blades are added. The result is that service providers end up with more data capacity as well, whether they need it or not. In a virtualized architecture, independent scaling is supported, meaning providers can scale up signaling capacity without affecting the data or management dimensions. This is the reason we chose to decompose each plane when we built our vEPC.

Applications need to be designed in a flexible way, allowing the scaling of VMs based on the specific call model or application (e.g., IoT, enterprise, consumer) and the availability of resources. By doing this, service providers can right-size the capacity for the specific call model.
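
A back-of-the-envelope sketch of what independent scaling buys: if each plane has its own per-VM capacity (the numbers below are hypothetical), the VM count for each plane can be computed from its own demand figure rather than grown in fixed blade ratios.

```python
# Back-of-the-envelope sketch of independent scaling: each plane is sized from
# its own demand figure and its own per-VM capacity, instead of growing all
# planes together in fixed ratios. All capacity numbers are hypothetical.
import math

PER_VM_CAPACITY = {
    "control": 50_000,      # signaling transactions/sec per control-plane VM
    "data": 40,             # Gbps per data-plane VM
    "management": 500_000,  # managed sessions per management-plane VM
}

def vms_needed(demand: dict) -> dict:
    return {plane: math.ceil(demand[plane] / cap) for plane, cap in PER_VM_CAPACITY.items()}

if __name__ == "__main__":
    # An IoT-heavy call model: lots of signaling and sessions, little throughput.
    iot = {"control": 400_000, "data": 20, "management": 5_000_000}
    # A consumer broadband call model: the opposite shape.
    consumer = {"control": 80_000, "data": 600, "management": 1_000_000}
    print("IoT call model:     ", vms_needed(iot))
    print("Consumer call model:", vms_needed(consumer))
```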

 

#8 Ownership is important

Service providers have traditionally relied upon their vendors to provide all the layers of a solution. NFV architecture is different. There’s a hardware layer, a hypervisor layer, and an application layer to consider, with each vendor bringing their own perspective to the solution. Instead of one finger to point when things go wrong, service providers must now point several fingers. This creates a challenge for providers in managing NFV deployments, as there is no clear accountability. 

At Affirmed, we’ve countered this problem by taking “ownership” of the NFV experience and ultimate responsibility for the way our vEPC solution behaves in the NFV infrastructure (NFVI) environment. Our customers appreciate having an experienced vendor as a lead implementer who can work with ecosystem partners to resolve any issues.

 

#9 One EMS is better than two (or three)

Service providers are accustomed to a single element management system (EMS) that displays the state of the system (e.g., alarms, traps, etc.) across all solution layers. In an NFV architecture, however, there are separate element managers for each layer. Having an overarching EMS that extends visibility into all layers and manages them through a single “pane of glass” is an important capability for any NFV architecture.
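
As a toy illustration of that “single pane of glass” idea, the sketch below normalizes alarms from hypothetical per-layer element managers (hardware, NFVI, VNF) into one consolidated, severity-sorted view. The layer names, alarm fields, and sample data are illustrative only.

```python
# Toy sketch of an overarching EMS view: alarms from per-layer element managers
# (hardware, NFVI/hypervisor, VNF application) are normalized into one list and
# sorted by severity, giving a single consolidated view. All data is illustrative.
SEVERITY_ORDER = {"critical": 0, "major": 1, "minor": 2, "warning": 3}

hardware_ems = [{"source": "server-07", "severity": "major", "text": "fan failure"}]
nfvi_ems = [{"source": "compute-03", "severity": "critical", "text": "host unreachable"}]
vnf_ems = [{"source": "vMME-1", "severity": "minor", "text": "license near capacity"}]

def consolidated_view(*layers):
    alarms = []
    for layer_name, layer_alarms in layers:
        for alarm in layer_alarms:
            alarms.append({"layer": layer_name, **alarm})
    return sorted(alarms, key=lambda a: SEVERITY_ORDER[a["severity"]])

if __name__ == "__main__":
    for alarm in consolidated_view(
        ("hardware", hardware_ems), ("nfvi", nfvi_ems), ("vnf", vnf_ems)
    ):
        print(f'[{alarm["severity"]:8}] {alarm["layer"]:8} {alarm["source"]}: {alarm["text"]}')
```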

 

Take the time to learn from the leaders

Perhaps the most important lesson to be learned from the leaders in the NFV journey is not to wait. There are those vendors who will tell you that NFV isn’t ready for prime time. What they’re really saying is that their solutions aren’t ready yet.

At Affirmed, we’re building virtualized solutions that give the leading operators of today the competitive advantage they need to remain the leaders of tomorrow. Our cloud-native, 5G core solution, UnityCloud, not only reduces CAPEX and OPEX but also provides the capabilities for new revenue-generating services including service automation and microservices creation.

Learn more about NFV deployments with this whitepaper we published.