Hearables – the new Wearables.

There’s a new bubble in technology – the wristband. Fuelled by Nike’s success, Jawbone’s on the Up, Polar’s in the Loop, Sony’s trying to Force its way into the game, while Fitbit’s aiming to stay as number One. (If you’ve ever wondered how branding executives choose their product names, that’s how.) Analysts are falling over each other to estimate how large the market will be by 2018. They’re wetting themselves at the prospect of smart watches, seeing the wrist as the saviour of the high tech industry now that smartphones have lost their Shine. (Which has nothing to do with the wrist, but that’s another story.) Currently Credit Suisse holds the prize for unwarranted optimism with a prediction of a market value of up to $50 billion for wearables in 2018. I think they’ve all missed the largest potential market for wearables – a category I’m going to call Hearables. The ear is the new wrist.

Analysts making these predictions almost invariably assume the wearable market is intrinsically linked with the smartphone market – currently around a billion units per year and worth over $250bn. To them, wearables seem to be mostly about smart watches and wristbands which extend small parts of the phone experience to something we wear. They ignore the fact that we still purchase smartphones to make calls. All of those calls send audio to our ears. As well as voice, hundreds of millions of people use their phones for music, as evidenced by the ubiquitous cables trailing from ears. Sound drives the bulk of our technology use, and earbuds are the only piece of wearable tech to have gained ubiquity and social acceptance. These devices are about to undergo a revolution in capability, shedding their cables and gaining the opportunity to become the standard bearer for wearable technology.

I’m currently writing a new market forecast report for connected consumer wearable technology. It argues that the biggest potential market for connected wearables will not be for devices we put on our wrists, but for the ones we put in our ears. It suggests that by 2018 we’ll be spending over $5 billion on Hearables. Let me know if you’d like a copy of the report when it’s complete.


Wearables – Novelty or Necessity?

This year at CES – the market-defining Consumer Electronics Show in Las Vegas – the general consensus was that 2014 would be the year when wearable technology took off. There were some dissenting views – some thought it would be the year of Home Automation, particularly when Google went and splashed out $3.2 billion for Nest, whilst others felt that the Internet of Things might move from being a playground for geeks to a mainstream business. But if you look at what is actually going into production, wearables look like the best bet. That was reinforced at the Mobile World Congress in Barcelona. Phones and hardware were distinctly passé; the excitement was in what you could wear.

The reason the industry is having this debate is that it’s entering new territory. It’s generally agreed that we are in the post-PC era. Whilst PCs and laptops aren’t dead, they’re a declining market. It’s a long time since anything happened that made people go out and buy a new laptop. Most people don’t change their laptops any more. They wait for them to die and then replace them, just like light bulbs and fridges. Smartphones are facing the same fate. It’s no longer cool for most people to have a new smartphone, and there is increasingly little to differentiate between them except price. They’ve reached that plateau where the only thing for manufacturers to do is to make them cheaper and try to saturate the market. Even tablets are entering the same territory, with Android tablets turning them from must-have accessories into commodities.

It’s a problem both for big brand names and for the ODMs and factories which are churning them out. The latter are desperate to find the “next big thing”, to the point that they’re courting new startups for small-scale production runs which they would not have countenanced a few years ago. Wearables are at the top of the list as the next desirable product. The question is whether that reflects reality, or whether the industry is at risk of believing its own PR.


Why Google swapped Motorola for Nest.

You can normally expect CES to set the technobabble agenda for the first few months of any year, but this year, despite the hubbub about Wearable Devices, Home Automation and the Internet of Things, CES played second fiddle to the much bigger announcement of Google paying $3.2bn for Nest. On the face of it, it’s an outrageous amount to pay for a hardware company. The announcement a few weeks later that they were selling off Motorola Mobility for around the same amount made it look like some giant game of cards.

The initial reaction to the Nest acquisition from many commentators was that it just validated their view that CES was heralding the start of the Home Automation market. Long-term players, like Control4, saw their share price rocket overnight. Most start-ups in this space have been complaining that their VCs dedicated the next board meeting to persuading the CEO to redesign the business plan in the hope of emulating the Nest sale. On the other side, doomsayers predicted the demise of everyone else in the field as the Mountain View leviathan took over Home Automation.

Most of what has been written feels like a knee-jerk reaction. What surprised me most was the speed with which Google rushed to say that it would not be using the data from Nest’s thermostats. That announcement came too quickly to be a reaction to media speculation about the data behemoth’s new-found ability to learn even more about users. It made no sense given Nest’s purchase value. It was like a top restaurant hiring Heston Blumenthal and then saying they were only employing him to go out and collect their lunchtime sandwiches. Google is about data. Why pay $3.2bn and then throw the associated data opportunity away? It implies that there’s a lot more behind this acquisition.


Investing in Wireless Standards, or 802.11ad – Mad, Bad and Dangerous to Know?

Investing in a new wireless standard can be an expensive experiment. The investment can be vast, as I’ve described in a previous article on the cost of wireless standards. It’s not unusual for the combined cost of writing a standard and bringing it to market to run into billions of dollars. When a standard loses out to a competing one, it’s a heavy loss both for the VCs who have invested in it and for the companies who have worked on it. The problem is that there’s not been a good way of determining in advance which standards will succeed and which will fail.

Up until this point, the only real yardstick has been the Intel test. That’s the principle that if Intel invests heavily in a wireless standard (think HomeRF, WiMedia or WiMax), then the standard will fail spectacularly. Conversely, if Intel withdraws its development effort from a wireless standard, as it did in the early days of Bluetooth back in 2002, then the standard will be a roaring success. The Intel test isn’t a perfect one – it fails to predict the acceptance of Wi-Fi – but with a track record of four predictions out of five, it’s a lot better than just flipping a coin.

What the industry needs is a new test. I’m going to suggest the Byron test. It’s a more literary approach, suited to the alphabet soup of the 802.11 family of wireless standards and inspired by the popular description of the Romantic poet as “mad, bad and dangerous to know”.


London. The Nexus of Big Data and Data Science

Over the past few years I’ve been working more and more with the large volumes of data that come from M2M and the Internet of Things. It wasn’t that long ago that “Big Data” was a novelty, largely a vision of the future – more talked about than done. In a few short years it’s morphed into the “next big thing” that everyone needs to have and which will save our planet and our health systems. Of course, Big Data itself is of limited use. What changes the game is the insight which can be extracted from it. That’s why the headline description of big data can be unhelpful. By concentrating on the “big”, it places the spotlight on the mechanics of database structures, diverting attention from the real skills that the industry needs to make it valuable.

I’d like to share some things I’ve learnt from my experience working in this area. The first is the continuing hype. When I put together a conference on the use of big data at the Cabinet Office last year, I was hard-pressed to find anyone really doing it commercially – the hype was still far greater than the practice. I don’t think that much has changed since then. We’re still on the lower, gentle slope of the Gartner hype curve. My guess is that the only companies making significant money from big data at the moment are conference organisers and consultants. But attention is being paid.

The second is the type of skills we need to cultivate.  We talk about Data Scientists as the new breed of practitioner, but that’s largely a self-invented title from data analysts who want more recognition.  Extracting value from big data, or broad data if you want to be more accurate, is more than that.  The best definition I’ve heard is that it’s about telling stories with Matlab.  It’s not about Hadoop or Cassandra – they’re just the mechanics. The reality is that Big Data needs to be about Data Storytellers if it is going to be transformational.
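As a toy illustration of that distinction – the numbers and the Python below are made up for this post, standing in for the Matlab of the definition above – the aggregation is just mechanics; the story is the single actionable sentence at the end:

    # A minimal sketch: turn a day of (hypothetical) hourly meter readings into
    # one sentence a householder or policymaker can act on.
    from statistics import mean

    # Hypothetical hourly energy readings (kWh) for one household over a day.
    readings = [0.3, 0.2, 0.2, 0.2, 0.3, 0.5, 1.1, 1.4, 0.9, 0.6, 0.5, 0.5,
                0.6, 0.5, 0.5, 0.6, 0.9, 1.8, 2.4, 2.1, 1.6, 1.0, 0.6, 0.4]

    average = mean(readings)                    # the mechanics: a simple aggregate
    peak_hour = readings.index(max(readings))   # hour with the highest consumption
    peak_share = max(readings) / sum(readings)  # how dominant that peak is

    # The "story": the insight, not the infrastructure that stored the readings.
    print(f"Average load is {average:.2f} kWh per hour, but {peak_share:.0%} of "
          f"the day's energy is used in the hour starting at {peak_hour:02d}:00 - "
          "that evening peak, not the average, is where the saving lies.")

The choice between Hadoop and Cassandra only decides where those twenty-four numbers live; the Data Storyteller’s job is the last line.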

The third thing is that this is something we do exceedingly well in London.  Other places may collect more data, build bigger server farms or invent more capable database structures.  But we tell better stories.  So if you want to generate value from big data, London’s the place to set up your business.


Smart Metering is FCUKED

Having delayed the Fiendishly Complicated United Kingdom Enduring Deployment of smart meters earlier this year because of technical problems, the British Government might have been expected to spend some time reviewing the technology it had mandated. If it had done so, it would have become clear that the programme was out of control. Under the surface, too many cooks have ratcheted up the technical complexity to the point where it is no longer fit for purpose. However, it appears that no-one wants to point out that the Smart Metering Emperor is stark naked. That’s largely because those overseeing the programme don’t have the depth of technical knowledge to understand the implications of what is going on.

As always with big Government-driven IT programmes, whilst there’s money to be made by the metering industry and consultants, momentum rules. It seems perfectly justifiable to carry on and saddle consumers with a £12 billion white elephant which will further inflate domestic energy bills. As a result of this lack of due diligence, smart metering is firmly on course to be the next big UK Government IT disaster.

It’s not that there’s a fundamental problem with smart metering, but there are massive mistakes in the way that the UK has decided to do it. When the programme started, it was seen as world-leading. It should have set a global standard for smart metering, giving UK plc a commanding lead in exporting expertise to the rest of the world and creating long-term employment opportunities. Instead it has resulted in an outdated, over-complicated system which will be incompatible with any other solution in the world, cost more than any other, fail to deliver the promised customer benefits, add risk to our energy security, threaten jobs, further alienate customers and make the UK energy industry a laughing stock.

If we look at the issues, the GB smart metering programme appears to have a unique capacity not just to duplicate major errors from previous Government disasters, but to combine many of them into one overarching Government-destroying fiasco.
