By Antoine Clerget on Tuesday, 23 February 2016
Category: Virtualization

Virtualization means changing the approach to proof-of-concept projects for both Carriers and Vendors

OneAccess CTO Antoine Clerget argues that vendors need to radically rethink their approach to PoC projects as carriers begin to plan their transition to software-defined network functions.

Until quite recently, when Telcos wanted to evaluate the different vendors’ technologies needed to build out new service platforms, the process was relatively straightforward. Typically, it meant plugging a box into the lab network, running the relevant functional and performance tests and then, assuming the results were acceptable, handing things over to the commercial and legal teams to thrash out the supply contracts. Well, perhaps a bit more complicated than that, but nevertheless a far simpler proposition than the one today’s engineers face when demonstrating or assessing NFV products.

Where the CPE router once came as a discrete device with a range of physical components on board, today individual network functions are decomposed into discrete elements and then stitched together in the virtualization infrastructure. In the new NFV environment, resources are to some extent shared with other (potentially third-party) VNFs and the underlying infrastructure, and the VNFs run on hardware unknown to the VNF vendor. As a consequence, moving from independent elements to a complete solution, even if it sits on a single piece of hardware, requires new types of integration skills. This means a new and different approach is needed, with a key focus on integration, particularly on how management, functional and infrastructure elements work together in an environment where there are still a lot of unknowns.

After a remarkably short period of technical debate between the various actors in the network industry, we are now seeing a definite upswing in interest in testing the claims of SDN and NFV technologies through genuine PoCs. This is especially true among those Telcos looking to future-proof their network infrastructures to achieve their major goals: flexibility for market differentiation and programmability to reduce costs.

As an early champion of the move to a more white-box/VNF approach to CPE architecture, we see this as a natural progression, building on our existing multi-functional router platforms, which already include an extensive range of software modules for specific network and security functions. At the same time, however, it has meant a total rethink of what is needed for a PoC project to be successful. With more emphasis on answering questions about the interoperability of the technology in this new and highly dynamic virtualized environment, our engineering teams need to take a much more direct, hands-on role in the process than was previously the case.

This is reflected throughout the PoC process, both in having one or more virtualization experts on the ground with their sleeves rolled up, and in providing remote support from engineering back at base. With many Telcos on a steep learning curve as to how best to adapt their network architecture to this new virtualized world, it is particularly important to be able to demonstrate how the technology can work together with an array of different systems, both virtualized and legacy.

This, combined with the evolving nature of the technology and products, makes it difficult to anticipate what questions and scenarios will arise as the PoC progresses. Telco teams are often assessing multiple technologies and products within a short period of time, which inevitably leads to a lot of spontaneous, what-if scenarios. For instance: ‘oh, you support that feature in DPDK; can we try it and see if it leads to a performance improvement’, or ‘now that we have seen your vCPE working, can we test whether we can chain it using NETCONF with these two functions from other vendors’, or ‘can we see which combinations of functions can actually be chained on a 4-core Atom as opposed to this Broadwell platform we just received yesterday’?
A key question concerns the integration of third-party virtual elements (VNFs), which can theoretically be done with almost no effort. It is therefore always tempting in a PoC to explore new use cases. You may end up with very complex scenarios working very quickly, but you may also suddenly be confronted with issues whose resolution requires many different stakeholders and areas of expertise.
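
As a rough illustration of the kind of ad-hoc experiment such questions trigger, the sketch below uses Python’s ncclient library to push an experimental service-chain configuration to a vCPE’s NETCONF agent. The host details, credentials and the XML payload are invented for illustration only; the real configuration would depend entirely on the YANG data models the vCPE and the third-party VNFs actually expose.

```python
# Minimal sketch: pushing an experimental service-chain configuration to a
# vCPE over NETCONF during a PoC. The host, credentials and XML payload are
# hypothetical; real deployments depend on the YANG models the VNFs expose.
from ncclient import manager

# Illustrative configuration: steer LAN traffic through two third-party VNFs
# (a firewall and a WAN optimiser) before it leaves the vCPE.
SERVICE_CHAIN_CONFIG = """
<config>
  <service-chain xmlns="urn:example:poc:service-chain">
    <name>lan-to-wan</name>
    <hop><order>1</order><function>vendor-a-firewall</function></hop>
    <hop><order>2</order><function>vendor-b-wan-opt</function></hop>
  </service-chain>
</config>
"""

def apply_chain(host: str, user: str, password: str) -> None:
    """Connect to the vCPE's NETCONF agent and apply the chain configuration."""
    with manager.connect(host=host, port=830, username=user,
                         password=password, hostkey_verify=False) as m:
        m.edit_config(target="running", config=SERVICE_CHAIN_CONFIG)
        # Read the configuration back so the PoC team can verify the result.
        print(m.get_config(source="running"))

if __name__ == "__main__":
    apply_chain("vcpe-lab.example.net", "admin", "admin")  # lab-only credentials
```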

Most hardware equipment today guarantees performance and behavior under certain "typical" working conditions, defined mostly by the nature of the traffic, such as packet size (e.g. IMIX). Based on these, one can quite easily predict the overall performance and behavior of a system that integrates various physical elements. When it comes to VNFs, the performance and behavior of the network function depend on the resources allocated to it at any given point in time. There is thus a need to broaden the concept of "typical" working conditions, from the nature of the traffic to the resources, hardware capabilities and their dynamics. Until this is stabilized and standardized, it will remain difficult to anticipate, test and certify the "performance" or the "density" of the various components/VNFs that make up the system.
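
As a back-of-the-envelope illustration of why the traffic profile alone is no longer enough, the sketch below estimates the packet rate a VNF must sustain at a given line rate under one common IMIX definition, and then the per-core packet budget as the allocated core count changes. The line rate and core counts are assumptions for illustration, not measurements of any particular platform.

```python
# Back-of-the-envelope sketch: the same IMIX traffic profile translates into
# very different per-core budgets depending on how many cores the VNF is given.
# Line rate and core counts are assumptions for illustration only.

# One common "simple IMIX" definition: 7 x 64B, 4 x 570B, 1 x 1518B packets.
IMIX = [(64, 7), (570, 4), (1518, 1)]
avg_packet_bytes = sum(size * count for size, count in IMIX) / sum(c for _, c in IMIX)

line_rate_bps = 1_000_000_000          # assume a 1 Gbps access link
ETH_OVERHEAD_BYTES = 20                # preamble + inter-frame gap per packet

packets_per_second = line_rate_bps / ((avg_packet_bytes + ETH_OVERHEAD_BYTES) * 8)
print(f"Average IMIX packet size: {avg_packet_bytes:.1f} bytes")
print(f"Required throughput at 1 Gbps: {packets_per_second:,.0f} packets/s")

# The same traffic load looks very different per core as the allocation changes,
# which is exactly the resource dimension that hardware datasheets never had
# to express.
for cores in (1, 2, 4):
    print(f"  with {cores} core(s): {packets_per_second / cores:,.0f} packets/s per core")
```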

What is clear is that in most cases, the testing experience itself leads very quickly to new options and challenges, and these can shift over time as new vendors and new versions of products come into the labs. To deal with all these curve balls, we have learnt that it pays to have our experts immediately on hand. Indeed, we often learn as much about the state of the technology as our customers do!

While this is obviously a major commitment, our experience to date has proven it to be a worthwhile investment, and one appreciated by the Telcos involved. For them, going through the PoC process is also a major commitment that can tie up their key people for several days for each vendor solution they want to evaluate. If they have to wait for answers to arrive via email, perhaps several hours or days later, focus and momentum can be lost in the meantime, delaying the assessment.

Fundamentally, the whole industry is in the midst of a massive learning curve, and virtualization demos and PoCs are revealing how dynamic and untested the various decomposed technology elements are. When dealing with new, market-disruptive technologies it is even more important that all the necessary resources are made available, not only to prove that the technology actually works as claimed (and to adapt it if not), but also to demonstrate the depth of knowledge and support that will ultimately be needed when the project moves to production.

Vodafone’s third annual M2M Barometer survey has confirmed that businesses are embracing M2M technologies faster than ever before. Over a quarter (27 per cent) of all companies worldwide are now using connected technology to develop and grow their businesses. In particular, the retail, healthcare, utilities and automotive sectors are all moving to maximize M2M’s potential. The returns are substantial: 59 per cent of early adopters reported a significant ROI on their M2M investment.

Despite the market buzz, many operators and CSPs have yet to zero in on the most profitable and operationally efficient way to support this new wave of industrialized connectivity, not least because the range of possible M2M use cases is vast. The diversity of devices being connected, their whereabouts, the conditions in which they operate and the amount of data they produce all influence the CSP’s choice of supporting network equipment. One key commonality, however, is that all deployments require a connectivity infrastructure capable of aggregating, securing and backhauling M2M data in a cost-effective, fast and reliable manner.

As the number of connected devices skyrockets, the ability to offer a range of traffic management services will be a clincher for operators and CSPs looking to gain a foothold in this market and differentiate their offerings. The good news is that many of these services can now be delivered via the CPE, without the need for additional devices. Establishing always-on connectivity is of course vital, but the ability to provide a robust business-continuity failover to LTE could also prove attractive to customers for whom any amount of network downtime is harmful, no matter how small. Network monitoring and dynamic traffic routing software managed via the CPE can also be used to support traffic throughput at peak load times.
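
To make the failover idea concrete, here is a deliberately simplified sketch of how such a business-continuity mechanism could look on a generic Linux-based CPE with a fixed-line WAN port and an LTE modem. The interface names, gateway addresses and probe target are assumptions; a production CPE would implement this inside its own routing stack rather than by shelling out to iproute2.

```python
# Rough sketch of fixed-line-to-LTE failover on a generic Linux-based CPE.
# Interface names, gateways and the probe target are hypothetical.
import subprocess
import time

PRIMARY_IF, PRIMARY_GW = "eth1", "203.0.113.1"   # fixed-line WAN (example addresses)
BACKUP_IF, BACKUP_GW = "wwan0", "10.64.64.64"    # LTE modem
PROBE_TARGET = "198.51.100.1"                    # reachability probe on the primary path

def primary_is_up() -> bool:
    """Ping the probe target via the primary WAN interface."""
    return subprocess.call(
        ["ping", "-c", "1", "-W", "2", "-I", PRIMARY_IF, PROBE_TARGET],
        stdout=subprocess.DEVNULL) == 0

def set_default_route(gateway: str, interface: str) -> None:
    """Point the default route at the chosen WAN interface."""
    subprocess.check_call(
        ["ip", "route", "replace", "default", "via", gateway, "dev", interface])

if __name__ == "__main__":
    on_backup = False
    while True:
        up = primary_is_up()
        if up and on_backup:
            set_default_route(PRIMARY_GW, PRIMARY_IF)   # revert to the fixed line
            on_backup = False
        elif not up and not on_backup:
            set_default_route(BACKUP_GW, BACKUP_IF)     # fail over to LTE
            on_backup = True
        time.sleep(5)
```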

Before the M2M market can reach true maturity, however, fears relating to data protection and security must be assuaged. Given the limited processing power of M2M sensors, which are incapable of performing heavy-duty computational functions such as encryption, the opportunity here is in the hands of CSPs and, again, the CPE can help.
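
The division of labour can be sketched as follows: the constrained sensors send plaintext readings on the local site network, and the CPE batches them and encrypts the backhaul. The snippet below is only an assumption about how such a managed CPE might behave, not OneAccess’s implementation; the hostnames, ports and record format are hypothetical.

```python
# Simplified sketch: the CPE, not the constrained sensor, does the encryption.
# Sensors push plaintext readings on the local site network; the CPE batches
# them and forwards them to the operator's backend over TLS.
# Hostnames, ports and the record format are hypothetical.
import json
import socket
import ssl

LOCAL_SENSOR_PORT = 9000                                  # plaintext, site-local only
BACKEND = ("m2m-gateway.example-operator.net", 8443)      # TLS-protected backhaul

def forward_readings() -> None:
    context = ssl.create_default_context()                # verifies the backend certificate

    # Listen for plaintext UDP datagrams from local sensors.
    sensor_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sensor_sock.bind(("0.0.0.0", LOCAL_SENSOR_PORT))

    batch = []
    while True:
        data, _addr = sensor_sock.recvfrom(1024)
        batch.append(json.loads(data))
        if len(batch) >= 10:                               # aggregate before backhauling
            with socket.create_connection(BACKEND) as raw:
                with context.wrap_socket(raw, server_hostname=BACKEND[0]) as tls:
                    tls.sendall(json.dumps(batch).encode())
            batch.clear()

if __name__ == "__main__":
    forward_readings()
```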

By using it as a managed service delivery platform, CSPs can aggregate all of a customer’s M2M data from across their site and provide encryption for the backhaul as part of their service offer. OneAccess has already supported a variety of operators with a diverse range of deployments, including:

Connected speed cameras: field data transmission from fixed and mobile speed cameras to the Traffic Offence Processing Centre in France.

OneAccess was tasked with creating a network capable of transmitting data collected by speed cameras across France to a central processing centre. The key objectives were to minimize the costs of both access provision and maintenance, and to ensure the security of the data transmissions. 

Water towers: telemetry data on water volumes and flow.

Technically an industrial internet deployment, this project saw OneAccess design and implement a real-time data transmission system to monitor water volume, flow and valve positions across a series of water towers and underground distribution units in Belgium, all transmitting their telemetry over a network served by ruggedized CPE routers.

Smart bins: intelligent monitoring of trash cans in Ghent, Belgium.

As part of a European program to optimize waste collection, OneAccess was tasked with creating a wireless network to transmit data from sensors installed inside trash cans, alerting the administering organisation when the cans required emptying.

Many of these ‘hidden’ M2M revenue opportunities are already out there. As is so often the case, it is the enabling qualities of multi-service CPEs that can help operators to capitalize on them.
