The IoT Value Stack
If you listen to almost any conversation about the Internet of Things, you’re likely to find that it fairly rapidly degenerates into an argument about communication protocols. Should you use Sigfox or LoRa? How about GPRS? GPRS has already been turned off in most of the US, but it could be around for another decade in Europe. Or what about NB-IoT? Or maybe it’s better to go for LTE-M? Not to forget the new radio that will be coming along in Release 15. Or is it Release 16?
Almost all of this is irrelevant. We already have enough low-power, low-cost communication standards to fulfil almost any IoT use case we can think of. The problem with the overabundance of ways to transmit data is that it diverts everyone from the more important (and much more difficult) part of the IoT, which is the rest of the value stack. This is the first of two articles where I’ll explain the wider IoT value stack and why we need to stop fixating on the comms. In this one I’ll go through the basics and then, in the second one, follow that up with more detail on security, the business models and the skills you need to succeed in the IoT.
We’re just a year away from 2020, which is when Ericsson predicted that the world would have 50 billion connected devices. The general consensus is that by “connected devices” they meant the Internet of Things, not least because their document showed a hockey-stick graph labelled “Things” with 50 billion next to it.
That 50 billion prediction was made up of three categories. The smallest is “Places” (the orange line), which covered applications like facility management and infrastructure. That included current favourites like smart meters and street lights, which account for quite a lot of the managed “things”. Ericsson’s prediction of half a billion of them is not bad – we’re not going to be far off that.
“People” (the blue line) is predominantly phones and once again Ericsson’s prediction is about right. The GSMA reckoned that the industry got to 5 billion unique subscribers back in June 2017 and estimates it will hit 5.7 billion in 2020. Not all of those 5 billion unique subscribers the GSMA are counting are people, as their number includes things like cars and pet trackers, but most of them are mobile phones, so we can say that Ericsson was pretty accurate on the “people” number as well.
The problem number is the “Things”. Ericsson expanded their definition slightly in 2014, renaming the category Capillary Networks, which includes devices behind hubs. Those hubs could be gateways like home routers or any of the several billion smartphones out there. But we’re struggling on the numbers. China is trying to change that by investing heavily in NB-IoT, claiming to have deployed over 180 million NB-IoT chips by the beginning of 2019, but outside that effort most of the candidate connection technologies are struggling to get past the 10 million mark. It’s all talk, and very little action.
We need to stop talking about LPWAN
The problem in the industry is that the frenzied debate about LPWAN obscures the more important consideration of what the IoT actually is. At its simplest, the IoT is about generating data from sensors and then analysing that data in the cloud.
I cannot stress this point enough. The key to thinking about the IoT is that simple. It is about capturing data and then deriving insight from it. Everything else is just a facet of the mechanics of doing that and doing it efficiently. If you think that the IoT is mainly about hardware or comms or analytics then you’re probably heading for disaster as you’ve not appreciated the big picture.
It hasn’t helped the industry that the cellular operators have tended to approach the IoT and their role in it with the view that “mobile is the answer – what’s the question?” The problem with that approach is that they rarely see beyond the connectivity and data contract aspects of the IoT. They also have an over-inflated view of their own competence, which leads them to believe that they can do all of the rest of it. As we’ll see, connectivity is just the tip of the iceberg.
The IoT Value Stack
Which brings us to the overall IoT value stack. The diagram below breaks it down into its five main parts: hardware, connectivity, IoT infrastructure, data analysis and, at the top, deployment and project management.
For a successful deployment you need to get all five right, and all five parts are equally important. If we start at the bottom, we have the hardware – the “things” which will get deployed. To get data at scale you need lots of them, so they need to be cheap to make and cheap to install. The Holy Grail is to make them for a few dollars, ship them in a Jiffy bag and as soon as you take them out and attach them to something they will start working and transmitting data. That ties in with the Deployment box at the top, as you don’t want to have to employ hordes of people to go and install them. This introduces what will become a common theme, which is that the different parts of the stack need to work in unison. In this case, the need to make deployment simple and cost-effective places requirements on the hardware design, which in turn affects the choice of connectivity and IoT infrastructure.
Moving up the stack, connectivity appears as a separate layer from hardware because it is still a very different discipline. As I’ve said above, it’s the item which dominates most conversations about the IoT, but hopefully not for much longer. In five years’ time it will probably have been absorbed into the IoT infrastructure layer, but for now it’s best to treat it separately. The IoT infrastructure layer is one that’s often forgotten or assumed to happen automatically, especially if you’re talking to someone who’s promoting a “platform”. It’s about managing your assets and making sure that they continue to work. It’s useful to think of it as the “Device Life Cycle”: monitoring the performance of every sensor you’ve deployed; ensuring that they remain secure and updated; checking that their batteries are working OK; that the network connectivity is reliable; that your cloud services are doing what they should; and analysing and reporting back if anything changes. Essentially, it’s monitoring the health of your IoT network so that you get back as much data as possible throughout the life of the deployment, as well as being able to trust it.
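To make that concrete, here’s a minimal sketch of what a device life-cycle record might look like, assuming each uplink carries a little health telemetry alongside the sensor payload. The field names and thresholds are illustrative, not from any particular platform:

```python
from dataclasses import dataclass

# A minimal per-device life-cycle record. Assumes each uplink carries some
# health telemetry alongside the sensor payload; names and thresholds are
# illustrative only.
@dataclass
class DeviceHealth:
    device_id: str
    battery_mv: int   # last reported battery voltage, in millivolts
    rssi_dbm: int     # last reported signal strength
    firmware: str     # firmware version the device reports it is running

def needs_attention(d: DeviceHealth, current_fw: str) -> list[str]:
    """Flag a device whose health telemetry is drifting out of bounds."""
    issues = []
    if d.battery_mv < 2200:       # threshold depends on the cell chemistry
        issues.append("battery low")
    if d.rssi_dbm < -120:         # marginal link budget for most LPWANs
        issues.append("weak signal")
    if d.firmware != current_fw:  # stale firmware may mean a failed update
        issues.append("firmware out of date")
    return issues
```

A fleet dashboard would run a check like this over every device after each reporting cycle, so failures surface as alerts rather than as gaps in next quarter’s data.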
Only once you’ve got all of that in place can you begin to work with your data. It needs to be cleaned and verified before you feed it into your business intelligence software, run your algorithms and start to extract the insight which you hope will justify the cost of the whole deployment. You need to manage all of these layers if you’re going to reap the benefits.
There shouldn’t be any surprises in those five layers; anyone who has been involved in an M2M or connected devices project will recognise them. The difference with the IoT is that the relative cost and complexity within each of these five layers have changed. That’s happened because of scale. As aspirations evolve from hundreds of sensors to millions and potentially billions of sensors, scale means that costs for hardware, connectivity and deployment come down. More sensors mean much more data, hence the growth in interest in big data. That allows far more complexity to be put into algorithms, which can be further enhanced by adding in data from other sources. Unlike M2M applications, the IoT needs a lot more expertise at every level if that scaling is going to work and you’re going to extract value. But the rewards are commensurately greater. It also needs much more cooperation in the detail of every layer if the whole stack is going to be efficient. To understand that we need to dig into the detail.
The IoT building blocks
The diagram below breaks the five IoT layers down into more detail, highlighting the main components and expertise required for each one.
Let’s start at the bottom, which is where we capture data with low-cost sensors. Hardware costs have plummeted, largely driven by the growth in smartphones, which has pushed down the cost of radios and sensors. Today it’s possible to design LPWAN-based sensors which will run off batteries for ten years for an overall cost of just a few dollars (assuming you’re manufacturing them in the hundreds of thousands). Possible, but not necessarily simple, as it requires designers and firmware engineers who understand low power, comms and how to design for manufacture. Those are surprisingly rare skills, as that level of optimisation is generally learnt, not taught.
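To see why that discipline matters, here’s a rough back-of-envelope battery budget, with illustrative figures I’ve invented for the example; real designs also have to account for peak currents, temperature and self-discharge, which is exactly where those rare skills come in:

```python
# Back-of-envelope battery budget for an LPWAN sensor (illustrative figures).
battery_mah = 2400            # AA-size lithium primary cell, typical capacity
usable_fraction = 0.8         # derate for temperature, peaks, self-discharge

sleep_ua = 5                  # deep-sleep current, microamps
sense_ma, sense_s = 10, 2     # take a reading: 10 mA for 2 s, once an hour
tx_ma, tx_s = 45, 5           # radio transmit: 45 mA, 5 s total per day

mah_per_day = (
    sleep_ua / 1000 * 24              # sleeping:      ~0.12 mAh/day
    + sense_ma * sense_s * 24 / 3600  # sensing:       ~0.13 mAh/day
    + tx_ma * tx_s / 3600             # transmitting:  ~0.06 mAh/day
)
years = battery_mah * usable_fraction / (mah_per_day * 365)
print(f"~{years:.0f} years")  # ~17 on paper; real designs have less margin
```

The naive arithmetic looks generous, but a single firmware bug that keeps the radio awake, or a regulator with a lazy quiescent current, can cut that figure by an order of magnitude.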
If you’re working at the IoT scale, hardware can’t be designed in isolation from the rest of the stack. Nor is it likely to be possible to use standard products. The hardware needs to support the data formats which have been specified by the data science team, who should be aware of the effect their requirements have on hardware cost and battery life. It should support remote management and firmware updates, so that you can reconfigure the data gathering as you learn which data is really valuable, and it needs to allow low-cost provisioning and potentially auto-configuration of the sensors, so that the deployment needs no skilled intervention. Once again, that means different departments talking to each other, employing an iterative design process that allows each layer to be optimised.
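As an illustration of that co-design, here’s a hypothetical uplink payload layout agreed between the firmware and data science teams. Every field, size and scale factor here is invented for the example; the point is that each byte costs airtime and therefore battery:

```python
import struct

# A hypothetical 12-byte uplink payload, agreed between firmware and data
# science. Every field, size and scale factor is invented for the example.
# < little-endian | I: uint32 | h: int16 | H: uint16 | B: uint8
FMT = "<IhHBBH"  # timestamp, temperature, humidity, battery, flags, sequence

def encode(ts: int, temp_c: float, rh_pct: float,
           batt_mv: int, flags: int, seq: int) -> bytes:
    return struct.pack(
        FMT,
        ts,                   # seconds since epoch (uint32)
        round(temp_c * 100),  # centi-degrees in an int16
        round(rh_pct * 100),  # centi-percent in a uint16
        batt_mv // 20,        # 20 mV steps squeeze 0-5100 mV into one byte
        flags,                # bit 0: tamper, bit 1: anomalous reading
        seq,                  # rolling sequence number, used to spot gaps
    )

payload = encode(1_700_000_000, 21.37, 54.2, 3000, 0b10, 42)
assert len(payload) == 12    # 12 bytes per reading instead of a JSON blob
```

Note the flags byte: it carries the tamper and anomaly markers that the infrastructure and data-cleansing stages further up the stack will rely on.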
Provisioning goes hand in hand with comms and is about how you connect your sensor to the network and start sending data. Mobile operators and phone manufacturers have trained us to waste an inordinate amount of our time inserting SIMs into phones and then setting up the phone. If you assume that the average person spends six hours setting everything up on their smartphone and then scale that up to 20 billion sensors, you’re looking at around 60 million man-years of installation effort (20 billion × 6 hours is 120 billion hours, or roughly 60 million working years at 2,000 hours per year), which would keep the full-time working population of the US busy for half of the year. That’s not practical. We need to dispense with that approach and design sensors which wake up, connect automatically to the network to authenticate themselves, auto-configure themselves and then get on with the job of capturing and sending data. That’s still a rather immature science, although eSIMs are beginning to appear in comms modules and offer promise. Here there is good news, as companies like 1nce are offering some innovative tariffs for eSIMs with auto-provisioning, whilst Aptilo offers seamless Wi-Fi connection (as long as you’re within range of a Hotspot 2.0 access point). However, we’re still in the early days of sorting out automatic provisioning at scale, which means you’re going to need some bright engineers who can help work this out.
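Here’s a minimal sketch of what such a zero-touch flow might look like on the device side. The transport callables and message names are placeholders, not a real protocol; an actual design would sit on top of whatever bootstrap mechanism the network and eSIM provide:

```python
import hashlib
import hmac

# Sketch of a zero-touch provisioning flow on the device side. The device
# ships with a unique ID and a factory-installed secret; the send/recv
# callables and message names below are placeholders for illustration.
DEVICE_ID = "sensor-00af3c"
FACTORY_KEY = b"per-device secret installed at manufacture"

def provision(send, recv):
    # 1. Wake and prove identity: answer the server's challenge with an
    #    HMAC, so the factory key itself never goes over the air.
    nonce = recv("challenge")                  # random bytes from the server
    proof = hmac.new(FACTORY_KEY, nonce, hashlib.sha256).hexdigest()
    send({"id": DEVICE_ID, "proof": proof})

    # 2. Pull down configuration: reporting interval, server address, etc.
    config = recv("config")

    # 3. Acknowledge, then start capturing and sending data unattended.
    send({"id": DEVICE_ID, "status": "configured"})
    return config
```

The point of the challenge-response step is that the installer never types anything in: take the sensor out of the Jiffy bag, attach it, and it negotiates its own identity and configuration.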
You’ve now captured the data and sent it into the cloud, but don’t get too excited, as you shouldn’t be rushing to do anything with it. As well as the primary data which you need for your business insight, you should also be capturing data about how well your network is working. That means monitoring the battery status, the quality of the comms link, which network your sensor is transmitting on, whether there are any tamper events, etc. – all of the things which tell you whether everything is working as you expect. Equally, at the server end, you should be checking to see which sensors have missed a transmission slot, whether they’ve sent the expected amount of data and whether the data is broadly what you’d expect. If anything is anomalous, you need to work out why and fix it, preferably by taking advantage of the capability you built into your hardware to reconfigure and update the firmware in each sensor. You also need to ensure that any security holes that crop up (and they probably will, as someone cracks some part of the many elements of the overall security chain) can be addressed by applying an update which is deployed to and confirmed by every sensor.
This is the first point where your algorithms become important: not to look at the sensor data you want for your business insight, but to check that your network of sensors is behaving the way you want it to and keeps on doing so. IoT analytics is so often forgotten, with the result that the data flow from sensors diminishes as they fail or become degraded, their batteries expire prematurely or their network connections experience problems. That either results in a loss of reliable data or a need to constantly replace them, which becomes costly. Make sure you pay attention to the IoT analytics function if you want a long-term, reliable stream of data.
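As a small example of that server-side checking, here’s a sketch that scans for sensors which have missed their expected reporting slot. The interval, grace period and data structures are assumptions for illustration:

```python
from datetime import datetime, timedelta

# Fleet-level check: which sensors have missed their expected reporting slot?
# Assumes a mapping of device ID to last uplink time and a fixed interval.
REPORT_INTERVAL = timedelta(hours=1)
GRACE = timedelta(minutes=10)  # allow for clock drift and network jitter

def overdue_devices(last_seen: dict[str, datetime],
                    now: datetime) -> list[str]:
    """Return IDs of overdue devices, worst offenders first."""
    late = {
        dev: now - ts
        for dev, ts in last_seen.items()
        if now - ts > REPORT_INTERVAL + GRACE
    }
    return sorted(late, key=late.get, reverse=True)
```

Run on every reporting cycle, a check like this turns silent sensor death into an actionable work list, rather than a surprise when the data set is analysed months later.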
By this point you’ll have chosen your cloud service for data storage and are almost ready to start work on the sensor data. But not quite. The general consensus within the industry is that only around 20% of the data that is captured is robust. So the next stage is data cleansing and validation, to make sure you’re not trying to generate insight from rubbish. I have seen numerous cases where data science teams have come up with extraordinary insights into how something is performing, only to discover it’s the result of calibration drift, mixed-up units, or forgetting about daylight saving time. The good news is that if you start off by getting the data science team to sit down with the hardware and firmware teams to consider potential problems and edge cases, you can make the data robust to the point where you can trust almost every data point. With a little bit of edge processing you can flag anomalous data before it’s sent to the cloud and process it accordingly. It’s not difficult, but it’s rarely done. Doing it properly can reduce the amount of data you send over the air, yet considerably increase the quantity of usable data that you receive.
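Here’s the sort of edge-side sanity filter I mean – a sketch with invented bounds; the real limits come from the sensor’s datasheet and the data science team’s edge-case review:

```python
# Edge-side sanity filter, applied before a reading is transmitted. The
# bounds are invented; real limits come from the sensor's datasheet.
PLAUSIBLE_TEMP_C = (-40.0, 85.0)  # the sensor's rated operating range
MAX_STEP_C = 5.0                  # largest credible change between readings

def validate(temp_c: float, previous_c: float | None) -> tuple[float, bool]:
    """Return the reading plus an 'anomalous' flag for the uplink payload."""
    lo, hi = PLAUSIBLE_TEMP_C
    if not lo <= temp_c <= hi:
        return temp_c, True       # outside the physically plausible range
    if previous_c is not None and abs(temp_c - previous_c) > MAX_STEP_C:
        return temp_c, True       # implausible jump: probable glitch
    return temp_c, False
```

Flagged readings can still be sent (in the flags byte of the payload sketch above, for instance), so the cloud side can distinguish “sensor misbehaving” from “data missing”.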
It’s only now that you can start to generate the insight from your data. Most of what I’ve described above is generally forgotten or ignored in the rush to generate pretty dashboards. Sadly, a lot of the real work on insight gets skipped too, as pretty dashboards appear to be what many believe the IoT is all about.
The foundations for generating insight start with the same applications that you use to run your business: the standard productivity software, including cloud services and databases, and the vertical applications for your particular market, which you are either already using or have commissioned for your IoT framework. IoT applications aren’t something magical – they’re built upon what your industry already uses.
Many IoT applications rely not just on your own data, but on external data as well. That may be weather forecasts, financial predictions, demographic data, the forward price of bananas – whatever makes sense for your business. It may come in multiple formats: time-series, structured or unstructured, event-based, natural language, all of which will affect your choice of database and analysis tools. A surprising amount may be free; some of it will not. But all of this should have been worked out, along with the data formatting and how to integrate it, when you started designing your project.
Which brings us to your insight development, where you aim to extract value from your data. You may know what you hope to find at the outset. It’s equally possible that you start with the ability to obtain large quantities of data and only begin to analyse it once you have some massive data sets. There are good examples of companies being successful with both approaches. It is always useful to have a good idea of what you’re looking for, but it is equally important to have an analytics team who are open to innovation. I’ve always felt that the best data scientists are the ones who can tell stories as well as doing the maths.
Don’t be afraid to play with your data. There are an increasing number of easily accessible analysis packages that let you do that, even if you’re not a hard-core data scientist. I particularly like Microsoft’s Power BI, not least because the desktop version is free. Let people with ideas have time to play, as the unexpected insight can be the most valuable.
Finally, you need to manage the project and deploy it. Historically, deployment and maintenance have been among the most expensive parts of any remote sensing project. That’s why it’s important to oversee the project as a whole, to make sure that those costs are minimised by good design.
There’s a lot to take on board, but if you don’t, there’s a real risk of failure, or of not getting the return on investment. So much of what we read about the IoT is limited to small-scale trials where a company deploys a few hundred sensors without really knowing what to do with them. The IoT should not be a bandwagon to keep consultants in work, or a poster project – it’s a critical element of a company strategy, which needs to be approached as a coherent entity. If any part of the value stack is missing or weak, it can threaten the whole project, which is why it’s so important to understand every part of it and how they interact. In the next part, I’ll look at what that means in practice – what skills you need and how to plan an IoT project.
Comments
We need to view IoT stacks as we would living organisms. How are they both generative and sustainable, not leading to one-off siloed solutions? This requires a complete rethink of the network paradigm and how supply and demand are cleared ex ante. What’s completely missing is a discussion of settlements (between actors, endpoints, layers, boundaries, etc.) as both incentives/disincentives and a means to equilibrate the (geometric) value captured at the core with the linear costs borne at the edge.
Graham,
That’s a very good point. I have omitted the control side, partly because I see it as a second phase of the IoT. I have to admit to being slightly anal in my definition, which is that in its first phase the IoT is about data insight, as we need to get that right. At the moment I’d argue that most of the control applications we are deploying today are still largely M2M applications, as they don’t rely on insight, but are closer to simple IFTTT-style rules.
I intend to revisit control in one of the next articles, partly to try and start a discussion on the conflict between comms latency and autonomous operation. When you bring the IoT into the control of connected devices, I’d argue that it should guide local operation rather than be the sole manager, as everything needs to continue working when the comms go down.
As usual, Nick, a very insightful article, most of which I agree with. But I feel you are only considering a subset of IoT use cases. IoT is not just about sensors, and certainly not just about mobile sensors.
IoT includes smart devices (close to 100% of TVs will be “smart” in a few years, and household heating systems will follow over, say, 20 years). More importantly when considering scale, it includes actuators: the opposite direction from sensors – “things” that can make changes, not just observe. And most “things” (whether sensors, actuators or smart devices) will be connected to power, and possibly to wired connectivity. For example, a very large IoT market will be industrial/enterprise security, with every door lock in business premises connected both for sensing and actuation. Another big part will be remote site actuators: for example, controlling valves for water on large farms.
I think you should explain that your analysis is only, so far, for the low-power sensor subset of IoT. I look forward to seeing the same sort of insights into the other segments, where I feel there may be much stronger value stacks.
Disclosure: I work for a vendor in the industry, but these views are my own, not my employer’s.
No net, no IoT. LoRaWAN and SIGFOX are no alternative to GPRS, or even NB-IoT or LTE-M. SIGFOX offers no indoor coverage in my district, no indoor coverage in two neighbouring districts and no coverage at all (not even outdoor) in Schwarmstedt. LoRaWAN is not available in any of the mentioned districts on public networks, indoor or outdoor. There are 3 GPRS networks and 2 NB-IoT networks for this purpose. Which networks will people use now? I have invested in a reference design covering NB-IoT, LTE-M and GPRS with over 20 bands. My reference board, akorIoT SensPRO, has open interfaces and, like a PLC, runs the customer’s code (Arduino SDK or C compiler). LoRaWAN and SIGFOX talk about the IoT, but they have to be measured against the 3GPP technologies. harald.naumann (at) lte-modem.com