Wanted – the Infrastructure of Things
Published in IoT
The Internet of Things has a problem. Unless we start looking at a new infrastructure, it may peter out after the first fifty billion devices. Everyone seems to be so excited about predicting whether it will be 20 billion or 50 billion or 1.5 trillion that they’ve forgotten about how the connectivity and business models will scale.
There’s a general consensus that we’ll get to between 25 and 50 billion connected devices by 2020. The first 25 billion of these is foreseeable. Around half will come from personal devices – mobile phones, tablets, laptops, gaming devices, set-top boxes and even cars, using cellular or broadband connections. These need moderately expensive broadband contracts, but we’re willing to pay for them because we can stream lots of data. The other half will come from machine-to-machine (M2M) connections, where broadband or cellular connectivity is embedded in commercial products to monitor their performance. That covers telematics, connected medical devices, asset tracking and smart buildings, as well as everything from vending machines to credit card readers. In this case the service contracts are justified by improved business efficiency.
The second 25 billion is likely to come from locally connected devices – generally personal products which connect to smartphones. Eighteen months ago I wrote a report on these appcessories, predicting that they could grow to an installed base of around 20 billion in 2020, getting us close to the total of 50 billion. These will piggy-back on existing broadband contracts, so most won’t have a service model. At best, there may be an opportunity for selling apps or subscription services.
However, at that point, future growth may start to slow. Although these products all get referred to as the Internet of Things, they’re only that in the loosest sense, as they rely either on personal user setup or professional installation. Both are time consuming and a barrier to ubiquitous deployment. To achieve the real Internet of Things we need products which can be taken out of the box and which connect and work autonomously. Without that, we’ll never get past the tens of billions. Despite all of the IoT hype, no-one is really addressing the hole that needs to be filled. We need an Infrastructure of Things – a new Low Power, Wide Area, end-to-end wireless Network (LPWAN), along with a new approach to data provisioning for life. This article explains why, and what the options may be.
The “why?” is pretty obvious, but generally overlooked because the limited number of Internet of Things products on the market today are mostly bought, installed and maintained by geeks. Most wireless or connected products do not work out of the box. You need to read the instruction manual, type in passwords and go through a moderately non-intuitive set-up process. That’s true even for the best designed products. If you then change your router or smartphone, the Internet link will stop working, requiring the user to repeat the process. It’s a workable model for a limited number of high-end, high value, desirable consumer products. But it’s limited.
Many of the Internet of Things products which are envisioned are low cost sensors, where the cost in deployment will need to be minimal. Essentially, those installing them will need to attach them to the appropriate surface and turn them on, at which point they’ll start working with no or minimal further user intervention. They should work anywhere – outside, or deep within buildings, with no need for site surveys or optimal siting. In other words, they need to work in the way that today’s products don’t. It should be equally possible to implement the technology inside commercial things like environmental sensors, bridge monitors and smart meters, as well as domestic ones like household appliances, dog collars and smart clothing. To achieve that we need a technology and network which meets the following criteria:
- Low Cost. Once it is shipping in the billions, it should be possible to implement it within products for around $1.
- Low Data Rates. This is not a high or medium speed requirement – it is for devices to report changes or updates. Most will send less than 1 kB of data each day. It should be possible to increase that, but with the proviso that this is never going to be for audio or video – it’s about data events.
- Low Power. This is linked to the low data rates. Simple sensors should be able to run off small batteries for 5 – 10 years. It should be possible to use energy harvesting for the lowest powered devices.
- Secure. With device security managed by the network, so it can be updated.
- Long Range, so that base stations can support thousands or tens of thousands of devices. That’s necessary to keep the infrastructure costs low, so that data contracts can be as low as a few dollars for a lifetime connection. It also requires a much better link budget than any current cellular solution provides to allow the low power consumption in terminal devices.
- Simple, low cost provisioning. It should support a base level of data provisioning for life, possibly in the chip license, so that devices ship with an ability to connect throughout their working life.
- Flexible Service Provision. A standard way for the operator to allow devices or customers to negotiate an enhanced level of provision after deployment.
- Quasi-real time. Many applications don’t need real-time access, particularly if they’re only doing daily reporting. But some may, particularly if control signals are being sent to devices.
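The low power criterion can be sanity-checked with a back-of-the-envelope battery life calculation. The figures below (sleep current, transmit current, burst length and battery capacity) are illustrative assumptions, not measurements of any particular radio:

```python
# Rough battery life estimate for a sensor that sleeps all day and
# wakes once to transmit a short report (illustrative figures only).

SLEEP_CURRENT_A = 2e-6      # 2 uA deep-sleep current (assumed)
TX_CURRENT_A = 0.040        # 40 mA while transmitting (assumed)
TX_SECONDS_PER_DAY = 5      # one short uplink burst per day (assumed)
BATTERY_MAH = 230           # e.g. a CR2032 coin cell, nominal capacity

SECONDS_PER_DAY = 24 * 3600

# Charge drawn per day, in coulombs (amp-seconds)
charge_per_day = (SLEEP_CURRENT_A * (SECONDS_PER_DAY - TX_SECONDS_PER_DAY)
                  + TX_CURRENT_A * TX_SECONDS_PER_DAY)

# Convert the battery capacity to coulombs: 1 mAh = 3.6 C
battery_coulombs = BATTERY_MAH * 3.6

days = battery_coulombs / charge_per_day
years = days / 365

print(f"Average draw: {charge_per_day / SECONDS_PER_DAY * 1e6:.1f} uA")
print(f"Estimated life: {years:.1f} years")
```

This ignores battery self-discharge and any receive windows, so real designs need margin, but it shows why the 5 – 10 year target is plausible at these duty cycles: the average draw is only a few microamps.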
If you listen to cellular operators, they’ll imply that not only can they do this already, but also that cellular is the only route to enabling the Internet of Things. That’s largely disingenuous. Cellular is too expensive, whether you’re considering power consumption, cost of hardware or service contract. Operators are busy dismantling their 2G networks, which is the only infrastructure they have that is vaguely suitable for any IoT application. The proposal that a future low cost variant of LTE is the answer has about as much understanding of the requirements listed above as Marie Antoinette’s pronouncement that the starving populace should eat cake.
What’s surprising is how few alternatives are being developed. Perhaps the best known is SigFox. This is a narrow band, one-way network which is being rolled out in various parts of Europe in the 868MHz ISM band. It provides a low rate uplink using a proprietary protocol.
Another contender is Weightless, a standard developed by a consortium of Accenture, ARM, Cable & Wireless, CSR and Neul, designed to operate in the TV White Space bands. On paper it probably comes closest to the principles outlined above, but it requires major changes in spectrum regulation. That has proven to be an impossible task in standard technology development timescales. Although there is no reason why Weightless could not be used in licensed portions of the spectrum, it has lost momentum and may well fall by the wayside.
Telensa is another proprietary standard from Plextek in the UK, which has recently been spun off as a company in its own right. Its primary market at the moment is for street lighting. The radio is moderately complex and unlikely to meet the price points required.
Matrix has been developed by TTP and is deployed in a number of proprietary solutions for their clients. They have recently made some public statements about the technology and it could serve as a possible option or starting point if taken up by a standards organisation.
One of the few US offerings is On-Ramp, an 802.15.4 based point-to-point solution operating at 2.4GHz. The company claims it can cover 400 square miles with 16,000 end points from each access point, with a range of 10km, as long as the access point is raised on a tower, building or hilltop.
Semtech’s LoRa is another technology that could be part of the solution, and it ticks a fair number of the boxes. It’s a sub-GHz solution that’s promoted as ultra-long range, with a claimed link budget of 160 dB. Semtech owns a fair amount of IP in this solution, but has a business model of embedding it into its chips, which may be limiting.
There are a number of other solutions being offered, mostly proprietary ones that have been developed for street lighting, parking sensors or smart grid applications. None really give the impression that they’ve had the depth of thought or commitment to standardisation to evolve into a universal IoT LPWAN.
It’s worth mentioning Thread, the recently announced wireless standard from Google and Nest. It’s not a wide area IoT standard – it’s a local mesh. However, from the details which are emerging, its development appears to have considered many of the issues above, so it could be a useful contributor to the debate.
Then we have the various LTE-M and other LTE-Lite variants. The telecoms industry likes these as they perceive them as an evolution from what they’re already doing. But they suffer from their heritage. However much they’re tweaked, they’re never going to meet the power or cost levels required – there’s just too much overhead in LTE. It’s like trying to take an internal combustion engine and claiming it can be slimmed down to become a running shoe. What the industry needs is a clean sheet of paper approach, although the final solution will almost certainly need to coexist with LTE and be deployable by existing network operators.
Provisioning, Provisioning, Provisioning
With my clean sheet of paper I’d start from the opposite end to every LPWAN proposal I’ve seen, which means provisioning. As we move from 50 billion devices to hundreds of billions we need to find a way for them to work as soon as they’re turned on. At these numbers we’re talking about tens or hundreds of devices for every human being, which is why it is inconceivable that anybody is going to be involved in configuring individual devices. They have to turn on and connect – no SIM card, no setup codes, not even pushing a button to connect; they just work. Moreover they must continue to connect for as long as they work. That is why they need to connect directly to a base station without any hub, router or smartphone in the way, as any intermediate device will cause complications if it’s changed.
The corollary is that every device must be pre-provisioned, probably at the point the chip is manufactured or programmed. That pre-provisioning should allow a device to connect to the network at power up and register itself. Depending on what contract the product manufacturer has set up with a network provider it will then negotiate a data service.
I’d suggest that every device should be provisioned with a lifetime minimum data service, probably of around 100 bytes sent once per day for as long as it works. Once registered, the product manufacturer can access that daily data. Depending on what contract they’ve set up, the network provider could then instruct the device to transmit more data or transmit more frequently. That could be for life, or on demand, such as when a fault is detected, or if a software update needs to be sent to the device. This probably means that there needs to be some form of central licensing authority.
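As a sketch of how that zero-setup, pre-provisioned flow might look, consider the toy model below. Every name and message field here is hypothetical, invented purely for illustration – none of it comes from an existing standard:

```python
# Hypothetical sketch of lifetime pre-provisioning: every device ships
# with a factory-burned ID and key, registers itself at first power-up,
# and is granted a baseline allowance of 100 bytes once per day for life.

from dataclasses import dataclass

BASELINE_BYTES_PER_DAY = 100  # the suggested lifetime minimum service


@dataclass
class Device:
    device_id: str   # burned in when the chip is manufactured or programmed
    secret_key: str  # factory-provisioned credential (hypothetical)


class Network:
    def __init__(self):
        self.registry = {}  # device_id -> daily byte allowance

    def register(self, device: Device) -> int:
        """First power-up: no SIM, no setup codes - the device simply
        appears and is granted the baseline lifetime allowance."""
        self.registry[device.device_id] = BASELINE_BYTES_PER_DAY
        return self.registry[device.device_id]

    def upgrade(self, device_id: str, bytes_per_day: int) -> None:
        """A manufacturer contract can later negotiate more data,
        e.g. for fault reporting or a firmware update window."""
        self.registry[device_id] = bytes_per_day


net = Network()
sensor = Device(device_id="A1B2C3", secret_key="factory-key")
print(net.register(sensor))            # baseline allowance: 100
net.upgrade(sensor.device_id, 1000)    # manufacturer buys more capacity
print(net.registry[sensor.device_id])  # upgraded allowance: 1000
```

Over a ten-year life the baseline works out at 100 × 365 × 10 ≈ 365 kB per device, which is why a few dollars could plausibly cover a lifetime connection.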
Secondly, spectrum. My view is that the IoT LPWAN should be in licensed spectrum, not unlicensed. Whilst that goes against the grain of many current proposals, this is a network that will need to work for at least twenty years. That’s about as long as the oldest 2G data networks have been running (longer than any GPRS network) and almost twice as long as we’ve had Bluetooth or Wi-Fi. The ISM band is getting congested and will get more congested as operators get hold of the 2.3 – 2.4 GHz spectrum. (Incidentally, running LTE next to the 2.4 GHz ISM band will probably scupper their plans to offload significant quantities of data to Wi-Fi networks because of interference, but that’s another story.) These IoT devices need dedicated, managed spectrum if the service is going to scale to hundreds of billions and make money for the network operators. It is important to remember that this needs to make money for the network operators, otherwise it will not happen. The IoT is not a free ride, as some seem to think – it needs valid business models to scale. Refarming an existing LTE channel or guard band in the 900 MHz band would make a lot of sense here. Most of these devices will not cross national boundaries and the majority will probably be static, so the spectrum does not need to be global. However, economies of scale, along with a desire to minimise national variants, suggest that a limited number of global frequencies be used.
Thirdly, silicon. To get things going so that they’re cost effective, the terminal solution should try to use existing chips. The past twelve months have seen a step-change in low cost, ultra low power chips which can probably be utilised, rather than waiting for new silicon to be spun. In the course of time that will happen, but the faster we can get the first generation out, the better.
Fourthly, price. I believe the target for silicon should be $1, with no more than a further $1 for other components and no more than $2 for basic lifetime data provisioning. At that level every industry can contemplate building this into almost any device they make. Unless we aim for that, then the Internet of Things is just a pipe-dream, which will be limited to vertical M2M applications and pricey toys for geeks. There are massive benefits in gathering data. Most home appliance vendors have no idea how their products are used, how often, or what goes wrong. Having that data allows them to design more effective products and develop much more interesting service models. As a simple example, it allows an all-in leasing model for home appliances which can include power and water costs. Those low dollar prices may not seem much, but multiply them across a trillion devices, and the revenue rivals that of any other industry sector.
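The arithmetic behind that last claim is easy to check. The unit prices are the targets above; the trillion-device volume is the aspiration, not a forecast:

```python
# Revenue implied by the $1 + $1 + $2 targets at trillion-device scale.

SILICON = 1.00           # target price for the radio silicon ($)
OTHER_COMPONENTS = 1.00  # other BOM components ($)
LIFETIME_DATA = 2.00     # basic lifetime data provisioning ($)
DEVICES = 1e12           # one trillion devices

bom_revenue = (SILICON + OTHER_COMPONENTS) * DEVICES
data_revenue = LIFETIME_DATA * DEVICES

print(f"Hardware revenue: ${bom_revenue / 1e12:.0f} trillion")
print(f"Lifetime data revenue: ${data_revenue / 1e12:.0f} trillion")
```

Two trillion dollars of hardware and another two trillion of connectivity is indeed comparable with the revenue of any other industry sector, even spread over many years.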
All of this needs to be secure; security must be the foundation for all of these points. That argues very strongly for a standards body to take control, as multiple inputs stand a better chance of ensuring a decent security model which covers all aspects of the solution. It also needs to develop a certification scheme to ensure that everything that comes to market conforms, and which encompasses the whole product chain from sensor device to data access on the network servers. That is one of the biggest challenges, as most standards bodies limit themselves to one part of the puzzle, generally opening up security holes at the interface to the next standard. Weightless tried, but didn’t have the scale. The obvious choice is ETSI, but that means changing many participants’ vision of the future. It would also be beneficial if whoever develops it takes the Bluetooth SIG’s RAND-Z approach to IP, as opposed to the dreaded GSM and 3G patent pools.
One of the reasons we don’t have a suitable network like this is that it’s one of those chicken and egg situations where there is little push to design and deploy anything until there is a demonstrable demand for it. Today, cellular connections and personal area networks like Bluetooth and Wi-Fi give the impression that the Internet of Things is progressing quite happily. No-one’s too bothered about the bottleneck that will come in 8 – 10 years’ time, once the high value connections have been cherry-picked, because the immediate opportunity of tens of billions is so big. But ignoring it leaves no economic way to connect the trillions of other devices. The standards and infrastructure we need will take a decade to develop. Unless we start addressing them now, things may grind to a halt at the end of this decade.
Because it needs to be nationwide and preferably continent wide, if not global, it also needs Governments and regulators to understand the need and help make it happen. And it needs the industry to stop developing piecemeal solutions and realise that there’s a bigger opportunity to be met. Otherwise the dream of a trillion connected devices will remain a dream. It’s not that there’s a lack of options, just a lack of vision and funding to develop the infrastructure to enable it.
True, and well caught. However, given the level of lies and hype in the IoT arena it’s probably one of the smaller misquotations.
Marie Antoinette never said “Let them eat cake” or anything close. It was just the propaganda of the time.
Interesting read! I agree with the majority of your statements, but (how predictably) a few need more explanation.
The provisioning you propose is valid for low quality-of-service (not mission-critical) applications. It is like a pre-paid model: it can never be 100% secure and can be a very uncertain factor in the use of spectrum.
What, according to your definition, is an LPWA-N? I am very curious to hear. In my opinion, none of the vendors mentioned provides an LPWA-N. When a vendor claims to need 200 base stations for a 35,000 km² area, and in reality it becomes (not for capacity reasons) more than 2,000 base stations, we are talking about an LPmWA-N.
Some others are very spectrum-unfriendly: they do not merely congest an ISM band but completely block it by playing with the rules, and they would do the same in any band in which they transmitted.
Regarding spectrum I totally agree with you, it should be licensed.
I have no real view on Accellus, as I can’t find any relevant information on their web site – just an IoT promotion video for Intel, which isn’t relevant to LPWA networks.
Nick, the article and the comments have been quite beneficial for me. Accellus has already deployed in Holland. What do you think about their solution? http://www.accellus.net/
Best regards,
Tunca.
I had intended to include nWave in the article, having come across them as part of Weightless, but I couldn’t find evidence of any deployments to determine whether they were real or imaginary. With Neul’s departure to Huawei, ARM pulling back from Weightless and any regulatory changes disappearing into the middle distance I can see the appeal of nWave and Weightless coming together to repurpose the standard in the ISM bands. The issue they have, along with everyone else targeting that shared spectrum, is reaching scale to become the pre-eminent standard. I was intrigued to see this week that the LoRa MAC is being released as an open standard. When that happens with a proprietary spec it’s generally an admission that it’s not selling. Weightless hasn’t even got to the point that it’s shipping anything.
I’d like to see nWave and Weightless succeed, but what they need now is to get some major partners on board. Very few (if any) wireless chip startups have produced a global standard without getting some of the big boys involved. It needs that, and not marketing statements, if they’re ever going to get to scale.
Nick,
This is a great discussion. What is your perspective of Nwave? They just joined the Weightless initiative and seem to be replacing Neul in the consortium.
Cheers,
Ken
You have hit the nail on the head. Most of the components in the Telecel street lighting device are related to the application and not the connectivity.
Thanks. The price indications were from a response to a question at a presentation from Plextek earlier this year. The board certainly looks busier than some of the other competing technologies, but it wasn’t clear how far it could be price reduced, or which features might be removable for a more general purpose deployment.
Nick
Thank you for another thought provoking article, particularly the part about provisioning.
Just to correct a couple of points on the widely deployed bi-directional Plextek ultra narrowband technology now being exploited by Senaptic and Telensa. The device radio is not so complicated and certainly does not make the technology inherently unable to meet the target price points you mention.
Nick.. please take a look at what we are doing.. this is true cross platform wireless data unicast technology.
And there was I hoping we’d have one interoperable Internet of Things instead of a riot of them. 🙂
But good luck. We need innovation to make it work.
Redstone Infrastructure of Things is the TM. aka R’IoT
Who would have thought you could trademark the Infrastructure of Things? Although the US Patent and Trade Mark office doesn’t appear to have a TESS record for it.
Nick please review our solution to your query
Infrastructure of Things™ | Redstone Technologies
http://redstone.us.com/infrastructure-of-things/
This should give you a new understanding and everything you described we enable.
Hi Nick,
Very good article and analysis. The points you’re mentioning are behind our choice to deploy a private wireless network based on 450 MHz technology and dedicated to our smart grid and smart meter applications.
Gilles
Regarding the cost of licensed spectrum, recent bids in Germany, Canada and Australia reported a valuation of USD 0.20 – 1.00/MHz/pop, which turns into USD 30M – 80M per MHz for these countries.
The bandwidth required for long-range, ultra-low bitrate networks such as SIGFOX is even lower than this, in the order of 200 kHz, i.e. a valuation of USD 6M – 16M – very much in line with several million objects on a sub-USD 5.00 yearly data plan.
Given the very small bandwidth required for such networks, legacy telecom operators may eventually allocate existing band edges, or even gaps between 3G/4G channels, to such networks, “only” requiring a SW upgrade of their infrastructure in order to add these new modulations and capabilities to their offering.
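Spelled out, the valuation arithmetic in this comment is (using the round figures quoted above):

```python
# Checking the spectrum valuation arithmetic (round figures from the comment).

PRICE_PER_MHZ_LOW = 30e6    # USD per MHz, low end of the quoted range
PRICE_PER_MHZ_HIGH = 80e6   # USD per MHz, high end of the quoted range
BANDWIDTH_MHZ = 0.2         # ~200 kHz needed for an ultra-narrowband network

low = PRICE_PER_MHZ_LOW * BANDWIDTH_MHZ
high = PRICE_PER_MHZ_HIGH * BANDWIDTH_MHZ
print(f"Spectrum cost: USD {low / 1e6:.0f}M - {high / 1e6:.0f}M")
```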
Nick,
Very thorough article.
I strongly agree with your comments about provisioning. We have seen the (negative) impact of M2M/IoT provisioning in a number of connected home, building automation and other M2M/IoT projects, even in cases with customers/users who are somewhat tech savvy and willing to put in some provisioning effort, and with much smaller device quantities than the scale that will arise with the forecast 25B, 50B or 1.5T devices.
I also think your article hints at another very significant issue that needs to be addressed for true massive scale of IoT: spectrum . . . but there is an additional aspect that relates to your call for the use of licensed spectrum. That is the issue of spectrum cost. With governments and regulatory bodies viewing spectrum as a national asset for revenue generation, keeping the cost of licensed spectrum very low will be a big challenge for potential network operators, yet it is what allows low cost devices to use licensed spectrum affordably, enabling billions or trillions of devices to become economically connected. So your call in the last paragraph for governments to catch the vision of enabling billions if not trillions of IoT devices is important.
I think these two issues: provisioning and spectrum (including cost) plus possibly some others significantly trump the issue you raise regarding the need for a new LPWAN as a future impediment to massive growth of IoT.
The history of telecom & networks shows that a single network protocol/interface is not sufficient for end-to-end networks. Invariably, several types of protocols/networks are used end-to-end across the local area, access area, aggregation/metro area and backbone/long-haul area . . . or other types of geo/capacity network domains, depending on how granular someone might like to categorize an end-to-end network (ex: the literal “personal area” now emerging with wearable devices). Each of the protocols/interfaces used in these different domains is optimized for different metrics of the variables you mentioned (price, data rate, distance (which directly correlates to power & spectrum) and quantity of connections/devices). So with the potential (still unrealized) for:
a. one or more PANs (especially mesh-type like ZigBee, Z-Wave, etc. or coming entrants like BTLE or 802.11ah) to hit the price points and data rate targets you outlined . . .
. . . and . . .
b. that a significant majority of IoT applications and scenarios will result in a pretty high quantity of end devices concentrated in/on a person, home, building or similar coverage area . . .
those end devices can all concentrate their traffic at a “gateway/aggregator”, which then uses the next tier of more expensive, higher data rate protocol/interface to provide the access connection into the wide area network (i.e. the longer distances of typical access networks, up to 10+ miles/kilometers). When these gateways are aggregating for hundreds or thousands of devices, the amortized cost of the gateway per end device can be in the pennies or tens of cents.
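The amortization claim is easy to illustrate (the gateway price here is an assumed round number, not a quoted figure):

```python
# Amortized gateway cost per end device (illustrative figures only).

GATEWAY_COST = 100.0  # assumed installed cost of a gateway/aggregator ($)

for devices in (100, 1000, 10000):
    per_device = GATEWAY_COST / devices
    print(f"{devices:>6} devices -> ${per_device:.2f} each")
```

At a thousand devices per gateway the per-device share is tens of cents; at ten thousand it drops to a penny.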
I think a new LPWAN standard is a good idea, especially given the issue of standardization development time followed by the time for hardware devices (i.e. chips) to scale to large enough volumes to hit the price points you propose (consider the time it is taking for Bluetooth to reach the volume levels and technology/manufacturing maturity necessary to hit the $1 – $2 price point). But I think some of the other issues you mention or hint at are going to be a more significant impediment to a trillion connected devices.
Nick,
Great analysis.
I think that moving to licensed spectrum must indeed happen, and it will happen in a second round, once SIGFOX and the like have proven to legacy cellular operators that the business is viable, past 100M connected nodes. This new service will be part of 5G.
SIGFOX & their modem partner TD have laid fantastic eggs in Europe, which will hopefully give birth to more chickens.
But financing nationwide deployments of such networks is expensive and requires one or two B2C killer apps to demonstrate break-even: metering, security?
New apps will then populate the network and make it really profitable.
I’m not sure Europe is lagging. Although there is wasted effort in a multitude of competing, proprietary technologies, most of what is happening is taking place in Europe. In contrast, very few initiatives are going on in the US. That may be partly due to geography. Europe’s denser population makes it more economical to experiment with small scale deployments. The important thing is that this then leads to larger scale.
There is hope here. Since publishing this article, GERAN – the GSM/GPRS evolution group within 3GPP – has published the proceedings of its May meeting, which include a proposal for a new study on Cellular IoT that supports most of the principles I’ve laid out. It’s supported by Vodafone, Telecom Italia, Orange, Telefonica, Huawei, HiSilicon and u-blox, so has a credible set of sponsors. Anyone interested in getting past the 50 billion devices should keep an eye on its progress.
Nick,
one of the few articles worth reading – I have already distributed it to everyone in my company who comes up with “50+x billion devices” forecasts.
Monetization of the service and end-to-end security cannot be solved while the discussion circles around the “best short range technology” or the lowest cost devices.
The first mobile phones were expensive and had poor service capabilities – but they were connected to a “killer infrastructure”. Unfortunately the IoT doesn’t have a use case like “a phone call without a wire” that people want to pay for. Plenty of small services will have to re-finance a multitude of communication channels.
What we need is a kind of trusted infrastructure, scalable and interoperable. For this infrastructure, identification and provisioning rules, as well as security methodologies, need to be implemented by a GSMA- or GlobalPlatform-like organization.
We lack this common understanding in Europe, and I fear that on the one hand we are wasting a lot of money on semi-proprietary smart metering communication, while on the other we will be outpaced by dominant US and Asian market players who will then own most of the IP.
Thanks for the update. When did it go two-way? Last time I looked it was effectively unidirectional.
Great article, Nick. I just wanted to point out that SIGFOX is two-way, not one-way.
Best
Thomas (SIGFOX)