In the racy world of CPE architecture, what virtualization-hungry service providers say they want isn’t always what they need, says Pravin Mirchandani, CMO, OneAccess Networks.
Alright, perhaps ‘racy’ is going a bit far, but as the virtualization industry moves out of ‘does it work’ and into ‘let’s make it happen’, pulses are certainly starting to quicken. Not least because service providers are having to make tough calls about how to architect their management and orchestration (MANO). Many of these decisions revolve around the deployment of virtualized network functions (VNFs), via some form of customer premises equipment (CPE).
Several ‘shades’ are emerging, each with their advantages and drawbacks.
The ‘NETCONF-enabled CPE’ model emulates what we have today: a fixed number of physical network functions (note: not virtual) are embedded into a traditional L3 multi-service access router. The key difference is that the router, as its name suggests, supports the NETCONF management protocol and can, as a result, be managed in a virtualized environment. In truth, this is a pretty rudimentary form of virtualization: the router can be managed by a next-generation OSS over NETCONF and its embedded physical functions can be turned on and off remotely, but that’s about it. The device is not reprogrammable, nor can its network functions be removed or replaced with alternatives.

The market for this deployment model lies in two use cases. First, as a bridging solution, it enables service providers to operate traditional and virtualized network services side by side, easing migration. Second, many of today’s VNFs are heavy, demanding considerable memory and processing resources to operate, which makes the more flexible white-box alternatives costly in comparison. Specialist vendors like OneAccess have been developing dedicated CPE appliances (with embedded PNFs) for years, where compact and efficient code has always been a design goal in order to keep appliance costs under control. For more conservative operators keen to get ‘in the game’, the proven reliability and comparative cost efficiency of this model can offset its relatively limited flexibility. Rome wasn’t built in a day, and some operators will prefer to nail the centralized management and orchestration piece before investing heavily in pure-play virtualization appliances for the network’s edge.
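To make the ‘turned on and off remotely’ idea concrete, here is a minimal sketch of building the `<config>` body for a NETCONF `<edit-config>` RPC that toggles one embedded function. The namespace `urn:example:cpe-functions` and the `functions`/`function` node names are hypothetical stand-ins; a real device publishes its own YANG models.

```python
# Sketch: constructing an <edit-config> payload to toggle an embedded
# physical network function (PNF) on a NETCONF-enabled CPE.
# The YANG namespace and node names below are hypothetical, for
# illustration only -- a real device advertises its own models.
import xml.etree.ElementTree as ET

NC_NS = "urn:ietf:params:xml:ns:netconf:base:1.0"

def toggle_pnf_payload(function_name: str, enabled: bool) -> str:
    """Return the <config> body enabling or disabling one embedded function."""
    config = ET.Element(f"{{{NC_NS}}}config")
    funcs = ET.SubElement(config, "functions")
    funcs.set("xmlns", "urn:example:cpe-functions")  # hypothetical model
    func = ET.SubElement(funcs, "function")
    ET.SubElement(func, "name").text = function_name
    ET.SubElement(func, "enabled").text = "true" if enabled else "false"
    return ET.tostring(config, encoding="unicode")

# Remotely switch off the embedded firewall function:
payload = toggle_pnf_payload("firewall", False)
print(payload)
```

In practice this payload would be wrapped in an `<edit-config>` RPC and sent over an SSH-based NETCONF session by the OSS; the point is that management, not reprogrammability, is what NETCONF adds to this class of device.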
A purer approach is to invest in a ‘thick branch CPE’ or, in other words, an x86-based white-box solution running Linux, onto which VNF packages can be pre-loaded and, in the future, removed, replaced or even selected by customers via, say, a web portal. This approach delivers far greater flexibility and is truer to the original promise of NFV, in which the network’s functions and components can be dismantled and recomposed to adjust a service offer. The snag, however, is that white-box CPEs come at a cost: more memory and more processing power mean more cash. That’s why the race is on to develop compact VNFs that minimize processing requirements and, as a result, enable a limited-spec white-box to do more with less. Again, unsurprisingly, those ahead of the curve are VNF vendors with experience of wringing every last drop of performance out of compact, cost-efficient appliances purpose-designed for operators and service providers.
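The ‘do more with less’ argument can be sketched as simple arithmetic. The figures below are illustrative assumptions, not vendor specifications: a modest white-box with 4 GB of RAM and 4 vCPUs, compared against ‘heavy’ and ‘compact’ builds of the same three VNFs.

```python
# Sketch: why compact VNFs matter on a thick-branch white-box CPE.
# All resource figures are illustrative assumptions, not real vendor specs.

def fits(vnfs, ram_mb=4096, vcpus=4):
    """Return True if the listed VNF packages fit within the box's budget."""
    need_ram = sum(v["ram_mb"] for v in vnfs)
    need_cpu = sum(v["vcpus"] for v in vnfs)
    return need_ram <= ram_mb and need_cpu <= vcpus

heavy = [  # hypothetical 'heavy' VNF images
    {"name": "router",   "ram_mb": 2048, "vcpus": 2},
    {"name": "firewall", "ram_mb": 2048, "vcpus": 2},
    {"name": "sbc",      "ram_mb": 1024, "vcpus": 1},
]
compact = [  # the same functions, compiled as compact images
    {"name": "router",   "ram_mb": 512, "vcpus": 1},
    {"name": "firewall", "ram_mb": 512, "vcpus": 1},
    {"name": "sbc",      "ram_mb": 256, "vcpus": 1},
]

print(fits(heavy))    # heavy images overflow the budget
print(fits(compact))  # compact images leave headroom for more services
```

Under these assumed numbers, the heavy images exhaust the box while the compact ones leave room for additional functions, which is exactly the lever that lets a cheaper white-box carry a fuller service portfolio.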
A third emerging model focuses on ‘thin branch CPE’. This is as close to ‘pure virtualization’ as you’re likely to see today. Here, VNFs reside in the service provider’s data center or at the network edge, and are accessed centrally via a NETCONF-enabled Ethernet access device, switch or another kind of network interface device. Sure, this device may host one or two ‘micro PNFs’ (for backup link management, for example, or service assurance), but the overwhelming majority of network functions will be hosted and deployed centrally. This option minimizes CPE costs, but instead demands huge investment ‘back at the ranch’. When scaled up to support an entire data center’s worth of customers, centralized multi-VNF management takes serious computing power, which draws yet more energy and will markedly increase the physical footprint of each facility. The payoff, however, comes in the form of maximum flexibility, service agility and much reduced opex.
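The ‘huge investment back at the ranch’ claim can also be put in rough numbers. This is a back-of-envelope sketch under assumed figures (3 VNFs per customer, 1 GB of RAM per VNF, 256 GB of RAM per data-center server); none of these are measured values, but they show how server count scales with the customer base.

```python
# Sketch: back-of-envelope central capacity for the thin-branch model,
# where VNFs run in the service provider's data center.
# All figures are illustrative assumptions, not measurements.
import math

def servers_needed(customers, vnfs_per_customer=3,
                   ram_per_vnf_gb=1, ram_per_server_gb=256):
    """Servers required if RAM is the binding constraint."""
    total_ram_gb = customers * vnfs_per_customer * ram_per_vnf_gb
    return math.ceil(total_ram_gb / ram_per_server_gb)

print(servers_needed(100))    # a small competitive carrier's POP
print(servers_needed(10000))  # an incumbent-scale customer base
```

The linear growth in servers, and with it energy draw and floor space, is the trade the thin-branch model makes in exchange for near-zero CPE cost and maximum central control.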
Which will win? None outright. At least not in the short term. The likelihood is that the market will fragment according to the differing requirements of operators and service providers. Smaller competitive carriers and OTT providers, which, compared to the incumbents, need to deliver fewer and more straightforward VNFs, could move to the ultra-flexible ‘thin branch CPE’ model quite quickly. Incumbents, however, challenged by the sheer size of their networks and operations, as well as the need to support legacy connections and a large installed base, will look to more hybrid deployment models.
Having been seduced by the promise of virtualization, today’s marketplace is now being tempered by commercial, technical and operational realities. This is likely to fracture the market and produce multiple ‘shades of NFV’. If operators and service providers intend to avoid falling victim to their own constraints, they will need to choose wisely when selecting their architecture.
What’s more, they also need a partner who can appreciate what it is that they really need, and not just give them what they think they want.