Airpods – a Speculative Teardown
Published in Hearables
On 7th September, Apple announced the demise of the 3.5mm audio jack. Alongside that, they introduced their Airpods, helping to stoke the momentum for a new world of hearable devices. The loss of the jack was a move which generated howls of anguish from the wireophile community, along with a flurry of speculation about how Airpods work, as well as what Apple’s new W1 wireless chip is doing.
Having worked with wireless standards and hearables for several years, I found much of that speculation ill-informed. Once Airpods come to market in October, companies like iFixit and Chipworks will take them to pieces and we’ll have a better idea of exactly what Apple have done. But those first teardowns are still a few months away. So I thought it would be interesting to attempt a speculative teardown, based on how I might have designed them, and on the limited information which is in the public domain. I also think I know what Apple’s second wireless chip will be, and it’s not the W2.
Before I start, I should state that I have no involvement with Apple or any of the chip companies supplying them; I don’t own an iPhone 7, nor have I been one of the lucky people who have had a chance to see or play with the early review versions of Apple’s Airpods. Everything you read from this point is best described as reverse imagineering. It’s all about having the courage to guess.
The first thing you notice about the Airpods is that they look totally different from almost every other wireless earbud. Others – Bragi, Nuheara, Doppler, Onkyo, et al. – have all gone for a design which fits completely in your ear. Apple have basically taken the design of their existing EarPods and cut off the cables. It’s led to a host of parodies, of which I’d recommend the video by Conan O’Brien, but it’s a brilliant move. As I’ll explain below, it makes it much easier to solve most of the issues which have plagued other earbud manufacturers. Apple is probably the only company that could have got away with this form factor, particularly by supplying them in white. Anyone else would probably have attempted to make them more discreet, offering them in black, silver or flesh tones. Apple have eschewed that to make a design statement that couldn’t be much more “in your face”. Time will tell whether that aspect of the product has worked.
On to the teardown. For starters, it’s clear that the audio is using classic Bluetooth A2DP. That’s pretty obvious, as Apple and reviewers have pointed out that Airpods can be paired with and work with other phones. If they were using some special Apple protocol or other wireless system, that wouldn’t happen. We’ve also heard that Airpods can be shared between two people, each hearing the music without needing to be too close to each other. That suggests that the same audio data is being picked up by each Airpod, rather than being relayed between them.
This is where it gets interesting. Standard Bluetooth A2DP (Advanced Audio Distribution Profile) is designed to send a single stream of stereo audio data to one Bluetooth receiver, which then separates it into left and right audio. A2DP can stream to multiple devices, but that’s more complex. There’s a white paper the Bluetooth SIG published ten years ago explaining how to do it. Each device needs to negotiate parameters such as codec configuration, and it’s important that they choose the same ones: they receive the audio as separate transmissions, which means the packets arrive at slightly different times and, if the devices have different parameters, may take different amounts of time to decode and render. If the stream is going to two different people that’s not a problem, but if it’s sent to left and right earbuds, the two channels need to be rendered within about 20 microseconds of each other. Any more than that and the sound image will appear to be off to one side; if the synchronisation drifts, the sound source will appear to move around, potentially inducing a feeling of nausea.
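To put that 20 microsecond figure in context, here’s a back-of-envelope sketch. The 44.1 kHz sample rate and the ±20 ppm crystal tolerance are my own assumptions for illustration, not anything Apple has published:

```python
SAMPLE_RATE_HZ = 44_100   # typical A2DP/AAC sample rate (assumed)
MAX_SKEW_S = 20e-6        # ~20 us before the stereo image starts to shift

# How many audio samples does the 20 us window correspond to?
samples_of_skew = MAX_SKEW_S * SAMPLE_RATE_HZ
print(f"20 us skew = {samples_of_skew:.2f} samples at 44.1 kHz")

# A +/-20 ppm crystal (a common tolerance, assumed here) drifts 20 us
# every second, so two free-running earbuds would drift past the window
# in about a second without a shared timing reference.
CLOCK_ERROR_PPM = 20
drift_per_second_s = CLOCK_ERROR_PPM * 1e-6
seconds_to_exceed_window = MAX_SKEW_S / drift_per_second_s
print(f"Window exceeded after ~{seconds_to_exceed_window:.1f} s of free-running drift")
```

In other words, the earbuds need to hold each other to better than one-sample accuracy, and to keep doing so continuously, which is why some form of inter-ear synchronisation signalling is unavoidable.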
Companies have developed a number of proprietary ways to ensure they are synchronised. The first was from Cambridge Silicon Radio (now part of Qualcomm), who developed a method where one chip receives the stereo signal, then separates out the left and right and resends one of the channels to a Bluetooth chip in the other earbud. They included synchronisation signals, so that the audio output of both chips could be coordinated in time. You also need to do this if one earbud is a music player, streaming data to the other ear, which is a particularly difficult use case. Others have streamed over Bluetooth A2DP to both earbuds and tried to use Bluetooth low energy to send synchronisation signals between the ears. Others have done the same thing using Near Field Magnetic Induction (NFMI). Some have even attempted to stream audio between earbuds using NFMI.
The problem with all ear-to-ear communication is that the head is remarkably effective in blocking 2.4GHz radio propagation. Designs which fit snugly within the ear have real problems in getting a signal across. The bigger the earbud, the easier this is, as you can include a bigger antenna, but the best solution is to put the antenna outside the ear. Apple’s design does exactly that. By locating the antenna in the long white battery and microphone boom, it moves it closer to the jaw where there’s a lot less attenuation from one side of the head to the other. So I surmise they’re using A2DP to stream audio from the phone to each Airpod and probably using BLE control signals between them to synchronise the audio rendering. So there’s no need for NFMI.
I spent a few minutes wondering whether Apple might have extended their MFi implementation, which apparently streams audio separately to left and right hearing aids using Bluetooth Low Energy. The Bluetooth SIG is currently developing a set of next-generation audio standards which will bring audio to BLE, but that’s for the future. The use of BLE for audio would result in the battery lasting days, not hours, so BLE audio is clearly not in play here, although I would expect that Apple have designed the BLE part of the W1 chip to support it in the future. We’ll come back to what’s in the W1 later.
Although the battery life isn’t measured in days, one of the marketing claims for Airpods is their five-hour battery life and better audio quality. Some of the initial reviewers have noticed the improvement, particularly the range and the fact that the audio rarely breaks up. We also know that the W1 chip is being used in Beats’ Solo 3, which claims 40 hours – double that of earlier models. Apple states that the improvement is “driven by the efficiency of the Apple W1 chip”. So how have they done that, and how much is just marketing?
Let’s start with audio quality. I see no reason why Apple would move away from the AAC codec they’ve used since the iPod, so there will be no change there. It’s possible they may have lowered the bitrate, as they’re now sending two streams and air-time coexistence would improve with less Bluetooth activity. However, the main issue with wireless audio quality isn’t the codec or bitrate. It’s those annoying clicks and pops which upset people, and they are generally caused by fading in the radio signal. Interference can be a problem, but Bluetooth’s adaptive frequency hopping generally takes care of that. I’ve seen a number of articles suggesting that Apple is discarding some Bluetooth features or using proprietary protocols, but that’s unlikely, otherwise the performance improvement would only be seen with their own phone. They could have done that, but if they had, it would not have made sense to use the W1 chip in the Beats headset, as it would largely confine their market to Apple users. The far better, and universal, approach is to improve the quality of the radio signal to increase the chance of packets being received.
My guess is that’s what they’ve done. The problem is analogous to getting heard in a noisy room – speak louder. For a Bluetooth radio, that means increasing the output power, but as you push it up, the power consumption increases substantially, driving down the battery life. It’s likely the Bluetooth radio in the Airpod is running at around 10dBm, which is a bit of a sweet spot. As well as shouting a bit, you can also improve the link budget by listening better, which is achieved by improving the radio’s receive sensitivity, typically by adding a low noise amplifier. So I’d guess there’s one of those in the W1.
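The link-budget arithmetic behind that guess is straightforward. The 10dBm output and −93dBm sensitivity are my speculated figures from above, and the class 2 baseline numbers are typical values, not measurements of any particular product:

```python
# Speculated W1 figures (from the paragraph above, not confirmed specs)
tx_power_dbm = 10        # guessed output power
rx_sens_dbm = -93        # guessed sensitivity with an integrated LNA
link_budget_db = tx_power_dbm - rx_sens_dbm
print(f"Speculated link budget: {link_budget_db} dB")  # 103 dB

# A typical class 2 earbud radio for comparison (assumed values):
baseline_db = 0 - (-87)  # 0 dBm TX, -87 dBm sensitivity
improvement_db = link_budget_db - baseline_db
print(f"Improvement over a typical radio: {improvement_db} dB")  # 16 dB
# Every ~6 dB of extra budget roughly doubles free-space range, so an
# extra 16 dB would explain the longer range and fewer dropouts
# reviewers noticed, at the cost of higher TX power consumption.
```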
Getting the five-hour battery life may just be natural evolution. Bragi’s Dash manages around 3 hours, but their design has a lot of other sensors and is also based on a chip which was introduced in 2011. In general, each spin of a wireless radio chip to a smaller geometry reduces the power consumption by about 20%. A chip released today would be about two generations further on, so that simple fact, combined with the size of the battery, is probably all Apple needs, and would explain how Beats get to their 40 hours.
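That argument is easy to check on the back of an envelope. The 20% per-generation figure and the 3-hour baseline are the assumptions stated above, not measured numbers:

```python
# Two process generations, each cutting power by ~20% (assumption above)
reduction_per_node = 0.20
power_factor = (1 - reduction_per_node) ** 2   # 0.8 * 0.8 = 0.64

# Scale a 2011-class earbud's battery life (Bragi Dash, ~3 hours)
baseline_hours = 3.0
estimated_hours = baseline_hours / power_factor
print(f"Power scales to {power_factor:.2f}x -> ~{estimated_hours:.1f} hours")
# 3 / 0.64 = ~4.7 hours: close to Apple's five-hour claim before
# counting any other savings from the larger battery in the stem.
```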
That brings us to the Airpod’s battery. Another advantage of having the long tube is you can fit a decent sized battery in it (certainly compared to other earbuds or hearing aids), which allows the power to be increased. However, Apple has done something else which is very clever. They’ve designed a charging unit which is small and which encourages people to put their Airpods back in the charger as soon as they take them out of their ears. That means they’re not sitting on a desk or in a pocket looking for an ear and using power while they do so. It’s an inspired design detail. If you do leave them out, they have two optical sensors to turn them off. (Two makes it easier to detect they’re somewhere other than your ear.) The sensors will only need to run on a duty cycle of 0.1% or less, so will have very little effect on power consumption.
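To see why a 0.1% duty cycle makes the optical sensors essentially free, here’s a rough calculation. The sensor current and battery capacity are plausible placeholder values of my own, not teardown data:

```python
# Hypothetical figures: a few mA while the sensor samples, tiny cell
active_current_ma = 5.0     # assumed sensor current while sampling
duty_cycle = 0.001          # 0.1% on-time, as suggested above
avg_current_ma = active_current_ma * duty_cycle
print(f"Average sensor drain: {avg_current_ma * 1000:.0f} uA")  # 5 uA

battery_mah = 25.0          # plausible earbud cell capacity (assumed)
sensor_only_hours = battery_mah / avg_current_ma
print(f"Sensors alone would take {sensor_only_hours:.0f} h to drain the cell")
# ~5000 hours: negligible next to the radio and audio path.
```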
A key feature of the Airpods is support for Siri, which is being promoted as the way of controlling what you play. That means you need some decent microphone technology to make sure you capture the user’s voice, rather than ambient sound. Putting a microphone at each end of the battery tube is an obvious way to do this, allowing a degree of beam forming to take place. Once again, the physical design gives them a clear advantage over in-ear designs. Having two microphones almost certainly means there’s a DSP core in the W1 to process the audio. I’d also expect Apple to take advantage of the accelerometers to detect bone vibration, allowing a further level of processing to extract your voice from background noise. Unless they’re doing that, one would be enough for detecting Siri taps.
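The simplest form of the two-microphone processing described above is a delay-and-sum beamformer. This is purely an illustrative sketch, not Apple’s algorithm; the microphone spacing, sample rate and steering geometry are all assumptions:

```python
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s
MIC_SPACING = 0.03       # ~3 cm between the boom's two mics (assumed)
FS = 16_000              # voice-band sample rate (assumed)

def delay_and_sum(mic_a: np.ndarray, mic_b: np.ndarray, angle_deg: float) -> np.ndarray:
    """Steer a two-mic array toward angle_deg (0 = broadside)."""
    # Extra path length to the second mic for a source at angle_deg
    delay_s = MIC_SPACING * np.sin(np.radians(angle_deg)) / SPEED_OF_SOUND
    delay_samples = int(round(delay_s * FS))
    # Delay one channel so sound from the steering direction adds in phase;
    # off-axis sound adds out of phase and is partially cancelled.
    if delay_samples >= 0:
        mic_b = np.roll(mic_b, delay_samples)
    else:
        mic_a = np.roll(mic_a, -delay_samples)
    return 0.5 * (mic_a + mic_b)
```

Steered toward the mouth, speech sums coherently while ambient noise from other directions is attenuated; any accelerometer-based bone-conduction processing would sit on top of this as a further stage.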
CNET’s reviewer noticed that the optical sensors determine the primary earbud for phone calls – selecting the first Airpod to be inserted as the dominant one. As this appears to be notified to the phone regardless of whether you’re streaming music, it reinforces the view that they’re using Bluetooth Low Energy for control, rather than AVRCP, although it looks as if this can be selected by the user as an option for both iPhones and other phones.
The “magical” pairing also tells us that BLE is in use. The Bluetooth features that are needed for the proximity based pairing have been in the spec since 2010, but Apple has been the first to put them together so intelligently. In the various videos, we see the connection screen pop up on the iPhone within two seconds of the charger lid being opened, with both Airpods being paired to the phone and ready to use within a further five seconds. There could be NFC involved, but I doubt it, mainly because the iPhone app shows the battery life of the charger as well as each Airpod. That suggests the charger has a W1 chip as well. If that chip starts advertising (think iBeacon) when the lid is opened it would bring up the pairing app, and pairing would start once the user clicks “Connect”. If the charger is already paired with its two Airpods, it can easily share the credentials between them and the phone in the few seconds remaining. An alternative approach would be for the charger to talk to the two Airpods via the split-ring charging contacts at the bottom of the stem, but that feels like extra, unnecessary complexity which might go wrong. And the iPhone 7 shows that Apple doesn’t like mechanical connections, whether that’s the home button or the 3.5mm jack. So I’m going with the third W1 chip.
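The speculated pairing flow can be sketched as a small state machine. To be clear, every state and event name here is invented for illustration – this is my reading of the videos, not a documented Apple protocol:

```python
from enum import Enum, auto

class PairState(Enum):
    IDLE = auto()
    ADVERTISING = auto()    # lid opens, charger's W1 beacons (iBeacon-style)
    AWAITING_USER = auto()  # phone pops up the "Connect" sheet
    SHARING_KEYS = auto()   # charger relays credentials to both Airpods
    PAIRED = auto()

def next_state(state: PairState, event: str) -> PairState:
    """Advance the speculated pairing flow; unknown events are ignored."""
    transitions = {
        (PairState.IDLE, "lid_open"): PairState.ADVERTISING,
        (PairState.ADVERTISING, "phone_in_range"): PairState.AWAITING_USER,
        (PairState.AWAITING_USER, "user_connect"): PairState.SHARING_KEYS,
        (PairState.SHARING_KEYS, "keys_delivered"): PairState.PAIRED,
    }
    return transitions.get((state, event), state)

# Walk the happy path seen in the demo videos (~7 seconds end to end)
state = PairState.IDLE
for event in ["lid_open", "phone_in_range", "user_connect", "keys_delivered"]:
    state = next_state(state, event)
print(state)  # PairState.PAIRED
```

The point of the sketch is that nothing in the flow needs NFC or a wired link through the charging contacts: a third W1 in the charger, advertising on lid-open and already paired with its two Airpods, is enough.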
What is clever, and will be Apple proprietary, is the way in which they transfer credentials and security keys between all of your Apple products in the background. Since iOS 10 was announced, it’s been interesting to speculate why the iPad mini and iPhone 4s were not supported. My guess is that these incorporated the first spin of Broadcom’s Bluetooth dual-mode chip, which lacked some of the features needed to support the enhanced security of Bluetooth 4.2. Without that, you really don’t want to be sharing security keys over the air. By the time the mini 2 and the iPhone 5 appeared, a newer generation of chip would have been in their Wi-Fi / Bluetooth modules, allowing Apple to develop this new pairing process with Bluetooth 4.2. It could, of course, be more prosaic, with Apple just not wanting to support such old devices.
What else? The accelerometers are also there for control, detecting when you tap them and using that to instruct Siri, or sending AVRCP commands to another brand of phone. Reviewers have expressed annoyance at the fact you have to pause music whilst Siri listens to your commands. The problem here is that A2DP is one way – it doesn’t support a return audio stream. It would also require a more complex mixing and noise cancellation ability to separate out your voice from the incoming music track. Those are harder problems to solve and my guess is that Apple is leaving them until it sees the user acceptance of the Airpod and how people use Siri. There’s enough innovation in the Airpods for a first release without trying to be too clever. Bragi showed us what happens when you take the opposite approach.
That’s the speculative teardown, but what does that tell us about the W1 chip? From the above analysis I’d expect to see:
- Bluetooth 4.2 dual mode with support for A2DP and the features needed for the next few Bluetooth releases, as well as Apple’s MFI audio standard.
- The ability to relay Bluetooth to a second W1 chip.
- Output power of 10dBm.
- An integrated LNA, giving receive sensitivity better than -93 dBm.
- Stereo audio outputs (to support Beats’ wired headsets), but probably not an audio amplifier, as earbuds only need one. I’d expect that to be an external chip – probably the same Cirrus Logic part that’s in the iPhone 7.
- A low power DSP for audio beam forming, with the capacity for echo cancellation, noise reduction and noise cancellation for future releases.
- A sensor fusion hub to support more complex accelerometer applications in the future.
- A competent low power microprocessor to run it all, along with enough memory to support future applications and OTA updates.
- No NFMI.
So why would Apple make such a chip? Developing a wireless chip with this spec isn’t cheap – it will probably cost at least $10m with a similar amount going on the stack. It is just a peripheral chip, which will never go into a phone, so the volumes are not that high. But it is interesting that Apple emphasised that this is their first wireless chip, in a tone suggesting it won’t be the only one. My guess is that Apple wants to incorporate Bluetooth and Wi-Fi into their next generation of processors, as that’s what their competitors are doing. Wireless can be difficult, so it’s a big step to do that in one go. Far better to design the wireless chip and evaluate it in an entirely new product category, as well as forcing one of your subsidiaries to use it. That way you get to test the chip as well as getting useful feedback for your next generation of earbuds.
When Chipworks take an Airpod or Solo 3 to pieces and unpot the W1, I expect we will see that it’s a multichip module. The main chip will be a Bluetooth dual mode one designed to be Bluetooth 5 compatible, but the DSP and micro are likely to be separate dies. If Apple is using this as a test vehicle for incorporation into future processors, then we may find 802.11 there as well. If that’s the case, we can be pretty sure what the wireless roadmap is. Not a W2, but the A11.
As I said at the start, this is all speculation. Come the end of October, when people start sniffing the airwaves and iFixit and Chipworks start taking their Airpods apart, we’ll see how accurate it is. At which point I’ll either be feeling smug or possibly deleting this blog.
I’m pretty sure it’s not NFMI in the case of the Airpod, nor some proprietary cleverness within Apple’s W1 chip. Otherwise it would not work when you share one Airpod with a friend.
The FCC report shows an odd loop in the flexi-circuits:
https://fccid.io/BCG-A1722/Internal-Photos/A1722-Internal-Photos-3118398
It might be the 2.4GHz Bluetooth antenna, but wouldn’t that more likely run down the side of the battery? Could it be for inter-ear Near Field Magnetic Induction (NFMI)?
This is a perennial topic which keeps coming up, whether it’s Bluetooth, Wi-Fi, cellular, TV or overhead power cables. To the best of my knowledge, no-one has yet found any evidence of an issue with the levels of exposure we experience in everyday life. You’re probably at higher risk from worrying about it.
From a purely technical viewpoint, Bluetooth earbuds are likely to be safer than wired earbuds. Which is rather ironic considering the article you’ve attached.
The reason for that is that in earbud design, you want to keep the antenna as far away from the ear canal as possible. Water (which is in abundance in the head) is very effective at attenuating 2.4GHz radio transmission. That means that if you want earbuds to work, you need to make sure that the antenna is as far outside the ear as possible. The Airpods are a good example – the antenna is in the white tube outside the ear. The Bluetooth radios don’t just connect to the phone; most earbuds need a wireless link from left to right to ensure they stay in sync, so putting the antenna on the outside is very important.
Most of the time, the transmit power of a Bluetooth earbud is only just above 1mW. In comparison, a mobile phone held next to the ear is typically 500mW. Even if you use a wired headset, the audio cables connecting them to a phone can act as an antenna and may be pushing several tens of milliwatts up to the ear. So, although it may be counter-intuitive, having a Bluetooth transmitter in your ear is probably the safest option if you’re worried about RF exposure.
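Converting those milliwatt figures to dBm (the logarithmic scale used throughout this article, where dBm = 10·log₁₀(P / 1 mW)) makes the gap obvious:

```python
import math

def mw_to_dbm(milliwatts: float) -> float:
    """Convert power in milliwatts to dBm (decibels relative to 1 mW)."""
    return 10 * math.log10(milliwatts)

print(f"Earbud TX  ~1 mW   = {mw_to_dbm(1):.0f} dBm")
print(f"Phone TX   ~500 mW = {mw_to_dbm(500):.0f} dBm")
print(f"Difference: {mw_to_dbm(500) - mw_to_dbm(1):.0f} dB")
# ~27 dB, i.e. the phone next to your ear radiates roughly 500 times
# more power than the Bluetooth earbud inside it.
```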
My personal view is that RF exposure from earbuds will turn out to be another pseudo-science worry, akin to the nineteenth century concern that women’s uteruses would fly out of their bodies if they travelled on trains going faster than 50mph. More productive areas of research might be:
• Sound pressure levels over an extended period. There’s not a lot of definitive research on this yet, and we probably won’t see the effects for another 5 – 10 years, but listening to music for 10+ hours a day is not something we know much about.
• Sealing the ear for long periods. Some earbuds have a fairly efficient seal which means that the ear no longer “breathes”. I’ve seen a few reports expressing concern about that, and about the fact it may provide a favourable micro-climate for bacteria, but once again, there’s not much research.
We do need to determine if there are risks whenever we start making fundamental changes to the way we live. We now have over 20 years of mass usage of mobile phones without any indication of physical damage from RF. We do have evidence of social change from the addictive effect of phones (read Sherry Turkle’s Reclaiming Conversation) and early hearing loss from listening to loud music (WHO report on deafness and hearing loss). What is important is that any reports and advice are based on evidence, not just on scare stories which are reported by the more sensational elements of the press.
Thanks for the analysis. The Airpods have impressive technology… better than others. But generally it brings the radiation close to the brain. What do you think about the risk of cancer?
https://www.20min.ch/digital/news/story/Airpods-koennten-Krebs-verursachen-13287722
For ANC you’re going to need a DSP capability and it’s not clear whether the current AirPods have that. Apple have sensibly concentrated on getting the basics right, rather than loading the Airpods up with tech just for the sake of it. So we may need to wait for AirPods 2.
The Bluetooth version is not really relevant in this. You need to perform ANC locally, as the latency in doing the round trip to and from the phone would be far too great to be effective. However, that local processing is getting easier, as we’re seeing a new generation of MEMS microphones which have inbuilt DSPs, reducing the processing load on the main earbud processor. Bluetooth 5 is essentially foundational for the next release of Bluetooth, which is going to allow audio to be streamed over Bluetooth Low Energy. But those products are still a few years away.
I’m currently writing an article speculating about what the technology roadmap for Airpods may be, so keep an eye out over the next few weeks.
What a great piece! Hooked and Subscribed instantly 😉
So you’re saying that AirPods could get ANC functionality in the future, even with the same specs? Does that mean more work for the W1 on BT 4.1?
Could it also be that ANC will be computed on the phone/source, and then simply transmitted, once BT 5 headphones start becoming the norm?
Thanks!
Hey Nick,
Now that the AirPods are out, were there any validations to your speculations here? I can validate that they do have audio going to both ears during a call (HFP). Does this mean they are definitely doing some BT retransmission from a primary airpod to a secondary airpod instead of 1 source 2 sink A2DP? If so, then when you take out the primary, the secondary must switch to the primary receiver, but how do they do that so seamlessly?… Since you would expect the phone source would have to switch to sending audio to a different BT sink?
My guess as to the strategic goal is a voice-centric user interface, just as the Echo did for the Amazon ecosystem. The Apple ecosystem is based on its computing devices (Mac and iPhone), so a wireless voice I/O device is important to enrich that ecosystem. On the market today, we don’t find feature-rich systems (BT and Wi-Fi with a powerful DSP for AEC, NR and BF), which is why Apple needed to build one.
A W1 with a future Wi-Fi feature enabled OTA could be the main CPU for its smart speaker, just like the Echo.
Thanks for that. There’s an interesting question in the comments, which is whether the left and right airpods may be different, as they apparently have different FCC IDs. That could just be because they’re physically different – effectively left and right handed. But it could also mean there is something more.
iFixit did a teardown for it yesterday. https://www.ifixit.com/Teardown/AirPods+Teardown/75578
We’ll probably never know, as this detail will be buried inside the chip and almost certainly won’t emerge from a teardown. The patent involved was U.S. Patent 7016654, which describes a method for power control. It’s probably more relevant to Wi-Fi than Bluetooth classic, but it makes perfect sense to include it in the new W1 chip. Its impact would be on battery life. If applied to the Bluetooth link in an Airpod, it would probably allow more dynamic power control than the standard Bluetooth specification method, which could be useful when you have different path losses to left and right Airpods. However, I’m not sure the gain would be worth the additional complexity, at least for a first generation product. Apple don’t generally push the bounds of wireless technology, but concentrate on getting the UX right.
Check out Edgewater Wireless. They specialize in Wi-Fi technology, multichannel wireless, reducing interference, etc. They hold 24 patents, and a lot of their tech applies to any wireless protocol in general. In 2015, Apple bought a patent from them for front-end power efficiency. The deal had “strong commercial terms attached”. I believe it is Edgewater’s tech that Apple used to enhance the Airpods when they were designing them, along with the patents acquired in the Passif Semiconductor acquisition. I’m confident that when the teardown reports come out, we will see technology from Edgewater’s patents inside the Airpods/W1 chip. Take a look, Nick! Edgewater’s tech coincides with some of the methods you described in this blog.
Yes, but if you dig into the detail, it supports Basic Rate, Enhanced Data Rate and Bluetooth Low Energy. BT 4.0 covers everything that came before – BR, EDR and High Speed. It’s not just BLE.
According to the registration info on the Bluetooth SIG’s website, the W1 is BT 4.0.
Gudday Nick,
I bought a pair of the Beats Solo 3 headphones today, and did some testing on how they connect between my iPhone 7, iPad Pro 10″, Apple Watch Series 2, MacBook Air 13″ (mid-2013) and MacBook Pro 17″ (early-2011). All running iOS 10, Watch OS 3 or macOS Sierra, all same iCloud account.
I found that the W1 chip in the Beats connected well with the iPhone 7 and Apple Watch, but there was a delay (an hour or so) in seeing the Beats in the Control Center of the iPad Pro (the Beats were in the Bluetooth settings screen, but not the Control Center).
They quickly appeared in the Sound icon on the MacBook Air menubar, but would not appear in the MacBook Pro menubar. I did a little digging in the System Report screens of the Macs, and there’s a few differences in their Bluetooth settings, namely the Bluetooth Low Energy Support, Handoff Support and Instant Hot Spot Support. These were all ‘Yes’ for the MacBook Air, but ‘No’ for the MacBook Pro.
Just some food for thought for you – not an exhaustive review by any means, but I suspect BT LE may be used for the connectivity, as I think you suspected in your blog.
Cheers mate, and thanks for the pre-teardown ;O).
Nick
It would almost certainly be HFP. The fact that you lose music when you’re using Siri suggests that they transition from A2DP to HFP for voice, and I’d expect it to be used exclusively for phone calls.
It’s difficult to tell from the reviews whether the audio goes to both ears, or just one. One of the reviews suggests the latter. That would certainly be a simpler implementation. We’ll need to wait for some more reviews to determine whether that’s the case.
What about during a phone call? What protocol would they use? Since A2DP is just one way, you would need bi-directional for a phone call since you are transmitting audio and receiving audio simultaneously. But this protocol would also have to support two sinks and one source (assuming you hear the phone call in both ears, and it just uses the beam-formed microphone audio from the dominant ear).
I couldn’t agree more.
can’t wait to get a proper teardown of airpods!