Will Apple drive new ecosystems with WWDC 2018?

Written by jorge.serna | Published 2018/03/15
Tech Story Tags: apple | development | wearables | amazon-echo | api


Providing new capabilities to developers can turn the Apple Watch or the HomePod from iPhone accessories into their own category

Apple will hold its developer conference this year during the week of June 4th. There are several rumors around new device announcements, but for me the relevant aspect of this event is actually in its name: developers.

Apple will talk in the different sessions about new capabilities for its operating systems (iOS, macOS, watchOS, tvOS) to be released later this year. It will also show developers how to use them to extract more value from Apple’s platforms, which in turn increases the value of Apple devices to their customers.

This is Apple’s ecosystem strength: get developers into their platforms so that more consumers (and additional developers) are driven to them, creating a reinforced virtuous cycle.

The ecosystem challenge

Currently, Apple’s main revenue source and main ecosystem is built around the iPhone and its operating system, iOS. The iPhone pull is being leveraged by new devices that currently are mostly accessories: the Apple Watch, AirPods or even the HomePod smart speaker.

But smartphone sales, including the iPhone’s, are reaching a point of saturation, driven more by replacement cycles than by new acquisitions, as Tim Cook himself has recognized.

So it is becoming important for Apple to make the jump into a new device cycle, less driven by the phone. I see two relevant pieces here: the Apple Watch and Siri.

The Apple Watch ecosystem

As I have discussed in other posts, it is difficult for the Apple Watch to become an ecosystem because of its status as an iPhone accessory. Having the iPhone around reduces the incentive for developers to create Watch-specific experiences. This is why many developers, such as Google, Amazon, and even Slack, have discontinued their Watch applications, and many others never even tried.

The cellular version of the Apple Watch may drive the phone-less usage, and thus create the right incentives for developers so that it becomes an ecosystem by itself. But there are currently two main limitations to this:

  • The actual usage of the cellular version. While the Apple Watch itself is considered a success by most analysts, the penetration of the cellular version is still very small, with recent reports putting it at 13% of shipments.
  • The capabilities available to developers in watchOS, which are still not as flexible as those available in iOS.

Both limitations are actually related. For instance, the advantage of being able to stream music on the cellular version of the Watch is diminished when it is restricted to Apple Music, which is the current situation. But Spotify cannot provide a similar experience to its users because watchOS does not expose streaming capabilities to developers.

Calling as a key feature

Another relevant limitation is the ability to make and receive calls on the Watch in phone-less mode. The cellular version of the Watch supports calls, but only for the regular calling service (provided by operators) and Apple’s FaceTime Audio. Applications like WhatsApp or Skype cannot provide that same function. In fact, due to limitations in Apple’s CallKit offering, you cannot pick up a WhatsApp call on your Watch even if your phone is close by.
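For context, CallKit is the framework that lets a VoIP app surface its calls through the native calling UI on the iPhone. The sketch below (with a hypothetical “ExampleVoIP” service name) shows roughly what that integration looks like; the point is that watchOS exposes no equivalent entry point to third parties, so a call reported this way can only be answered on the phone.

```swift
import CallKit

// A minimal sketch of how a VoIP app reports an incoming call through
// CallKit on the iPhone. "ExampleVoIP" is a hypothetical service name.
let provider = CXProvider(
    configuration: CXProviderConfiguration(localizedName: "ExampleVoIP"))

func reportIncomingCall(from caller: String, callId: UUID) {
    let update = CXCallUpdate()
    update.remoteHandle = CXHandle(type: .generic, value: caller)
    // The system presents the native incoming-call UI on the iPhone;
    // watchOS offers no counterpart to this API for third parties.
    provider.reportNewIncomingCall(with: callId, update: update) { error in
        if let error = error {
            print("Could not report call: \(error)")
        }
    }
}
```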

Calling may sound like a very specific use case, perhaps not that relevant for many users, and definitely relevant for only a small number of developers. But Apple has focused so much of its marketing for the cellular version on calls that this is also affecting device penetration: few operators are currently able to offer the cellular Apple Watch.

I covered the complexities of providing regular calls on a cellular Watch in a previous post (even if I failed in some specific predictions), so I will not go into those details here. The upshot is that outside the US, cellular Watch availability is very limited.

But if Apple allowed third-party services to provide calls on a phone-less Watch, it could keep pushing that use case in its marketing (and the music-streaming one too) while making it simpler for operators to support the device, increasing its potential reach and thus its attractiveness to developers.

The battery issue

Of course, one reason Apple is not pushing phone-less usage, and is therefore restricting streaming and communication services, may be that the Watch’s battery life only allows limited phone-less use. While independent usage is critical for the Watch to become a real ecosystem, today it is not practical to leave the phone at home for a long period and rely on the Watch alone. The cellular Apple Watch is only suited to casual usage: running errands, going to the gym, or maybe a short walk.

If Apple’s announcements at WWDC around watchOS provide streaming and voice-communication capabilities to developers, or even other options that would drive more engaged interaction when a phone is not present, this will point to two things:

  • That the next generation of Apple Watch will have significant improvements in battery life, so we should be excited about the Series 4.
  • That Apple is really pushing for watchOS to become an ecosystem of its own, eventually even cannibalizing iOS itself.

The Siri ecosystem

The other ecosystem Apple should improve at WWDC 2018 is the one around Siri. This week an article from The Information highlighted not only Siri’s struggles but also a lack of strategic direction at the heart of many of its problems.

Some of these problems show up in reviews of the HomePod, which note that while the device’s audio quality is amazing, its performance as a smart speaker is far behind what Amazon offers with Alexa on its Echo line.

This is to some extent related to the “schizophrenic” behavior Siri shows across devices, which demonstrates that multi-device ecosystems are still a hard problem to tackle. For instance, one of the HomePod’s highlighted features is that it does not conflict with other Apple devices (say, an iPhone) when you say “Hey Siri”: only the closest device processes the voice interaction, which in theory sounds like a great option. But since the functionality Siri offers differs somewhat from device to device, this becomes a user-experience problem. If I try to hail an Uber via Siri expecting the iPhone to pick up the request, but the HomePod, which does not support that, picks it up instead, the feature has not been particularly helpful.

I have also discussed in another post the issues I see with the HomePod depending on a nearby iPhone for iCloud functionality, especially its limitation of supporting only a single user. All of these issues add up to a Siri experience that is unpredictable and confusing across devices.

Last, but not least, the abilities developers have to extend Siri are very limited. Apple provides SiriKit on the iPhone for applications that want to expose some of their functionality through voice, but the use cases are quite restricted (calling and messaging, ride booking and restaurant reservations, note dictation, and a couple more) and are not available on the HomePod at all.
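To make the restriction concrete, here is a minimal sketch of what integrating with one of those SiriKit domains looks like: a handler for the messaging intent, which lives in an Intents app extension on the iPhone. Anything outside the predefined domains simply has no equivalent hook for developers.

```swift
import Intents

// A minimal sketch of a SiriKit handler for the messaging domain.
// Handlers like this run in an Intents app extension on the iPhone;
// use cases outside the predefined domains cannot be expressed at all.
class SendMessageHandler: NSObject, INSendMessageIntentHandling {
    func handle(intent: INSendMessageIntent,
                completion: @escaping (INSendMessageIntentResponse) -> Void) {
        // Hand the message content off to the app's own service here.
        completion(INSendMessageIntentResponse(code: .success,
                                               userActivity: nil))
    }
}
```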

All this adds up to Siri not really being a sustainable ecosystem for developers today, and so not being as valuable to Apple as it could be.

What Apple could do for Siri

There are many things Apple could announce for Siri during WWDC 2018 that would help solve this and make it a more viable platform both for users and for developers. My favorites are:

  • Multi-user support in the HomePod based on voice authentication. Opening this capability to developers as well (something Alexa does not currently provide for its Skills) could also drive a new trend of secure voice applications that would only offer certain options or information once the speaker has been properly identified.
  • A coherent behavior for Siri across all devices, allowing users to do the same things in the same way regardless of whether they use Siri from their iPhone, Apple Watch, HomePod, Mac, or Apple TV. This kind of foundation would have to make behaviors device-independent, or at least allow devices to communicate with each other to complete tasks. It would also mean that a SiriKit capability provided by an iPhone app, like sending a WhatsApp message or hailing an Uber, would be available across devices, which in turn makes SiriKit much more attractive for developers.
  • A more flexible SiriKit, from new capabilities to an app-less experience for developers. If Siri apps become independent of the device, why not go all the way and offer some sort of “SiriKit for iCloud”, which would let developers provide Siri capabilities without the user installing an app on their iPhone, or even without an iPhone at all if they are just using Siri from their Mac? This would also make the Apple Watch ecosystem much more valuable, as more and more developers could add value to it without actually creating a watchOS app, and users could access that functionality while phone-less too. This would basically copy Amazon’s model for Alexa Skills, but the scale of the Apple device ecosystem could be the real way to challenge Amazon’s current leadership in the smart-assistant space.

If Apple really seizes the chance to move into the next wave after the iPhone by empowering the (currently only potential) ecosystems around the Apple Watch and Siri, we could start seeing developers create amazing new things pretty soon. I hope we see some of that at WWDC 2018 in a few months.


Published by HackerNoon on 2018/03/15