Sunday 30 March 2014

Android Wear - Beyond Initial Preview

The current Android Wear Preview SDK's APIs define how to send notifications from a phone/tablet to a wearable device. The notifications on the wearable can in turn carry actions, which allow short voice replies (substituted with the keyboard on the preview emulator) or open applications on the phone. And while the emulator allows native apps today, that might not be the encouraged route in the release version.
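As a concrete illustration, a phone-side notification with a voice-reply action might look like the sketch below. It is based on the preview SDK's support classes; the `android.preview.*` package names may well change by release, and `ReplyActivity` is a hypothetical activity that would receive the reply text.

```java
import android.app.Notification;
import android.app.PendingIntent;
import android.content.Context;
import android.content.Intent;
import android.preview.support.v4.app.NotificationManagerCompat;
import android.preview.support.wearable.notifications.RemoteInput;
import android.preview.support.wearable.notifications.WearableNotifications;
import android.support.v4.app.NotificationCompat;

public class WearNotifier {
    public static final String EXTRA_VOICE_REPLY = "extra_voice_reply";

    public static void notifyWithVoiceReply(Context context) {
        // Hypothetical activity that will unpack the reply text from the intent.
        PendingIntent replyIntent = PendingIntent.getActivity(
                context, 0, new Intent(context, ReplyActivity.class), 0);

        NotificationCompat.Builder builder = new NotificationCompat.Builder(context)
                .setSmallIcon(android.R.drawable.ic_dialog_email)
                .setContentTitle("New message")
                .setContentText("Reply from your wrist")
                .setContentIntent(replyIntent);

        // Free-form voice input on the wearable (keyboard on the preview emulator).
        RemoteInput remoteInput = new RemoteInput.Builder(EXTRA_VOICE_REPLY)
                .setLabel("Reply")
                .build();

        // Wrap the standard notification with wearable-specific behaviour.
        Notification notification = new WearableNotifications.Builder(builder)
                .addRemoteInputForContentIntent(remoteInput)
                .build();

        NotificationManagerCompat.from(context).notify(1, notification);
    }
}
```

On a paired wearable this surfaces as a card whose reply action opens the voice prompt; on a plain phone the notification still behaves normally.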

This post tries to understand, and then speculate a bit about, what to expect from the SDK, based on information provided by Google.

Near Future Release

According to Google, the following features are to be expected in near-future releases of the Wear SDK:

1) Build Custom UI: Create custom card layouts and run activities directly on wearables.
Custom UI :
This is the icon used for custom UI, but what can be achieved still looks very limited, as everything appears to be a card. Will we ever be able to process gestures? Will full-fledged native applications remain a possibility?
We're already familiar with cards, thanks to Google Now and Google Glass. The mock graphic, borrowed from the official Wear dev site, shows navigation as a use case.

Even with the very first preview SDK, we can already run full applications natively on the Wear emulator. Apart from the limited peripheral/sensor options and the UI distortion caused by the small display, these do work. However, without voice actions in the initial preview, the only way to launch apps is over adb.
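For instance, an activity can be started on the emulator straight from the shell (the component name below is a placeholder for whatever app you've side-loaded):

```shell
# List attached devices; the Wear emulator shows up like any other.
adb devices

# Launch a (placeholder) activity on the Wear emulator by component name.
adb -s emulator-5554 shell am start -n com.example.wearapp/.MainActivity
```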

At this stage we are not sure whether future versions will allow activities that are not limited by card swipe gestures. If they are limited, swipe gestures will be consumed by the system and only a reduced subset of touch events will reach the activity/fragment/view; e.g. home-screen app-widgets in Android never get to process left-right gestures. As with immersive mode on regular devices, the system may use edge-gesture detection for system-level swipes while still allowing all sorts of swipe gestures within custom views.
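Inside a custom view, horizontal swipes can already be picked up with the standard `GestureDetector`; whether the system will let such events through on Wear is exactly the open question. The class below is an illustrative sketch using regular Android APIs, not anything Wear-specific:

```java
import android.content.Context;
import android.view.GestureDetector;
import android.view.MotionEvent;
import android.view.View;

public class SwipeAwareView extends View {
    private final GestureDetector detector;

    public SwipeAwareView(Context context) {
        super(context);
        detector = new GestureDetector(context,
                new GestureDetector.SimpleOnGestureListener() {
            @Override
            public boolean onFling(MotionEvent e1, MotionEvent e2,
                                   float velocityX, float velocityY) {
                // A mostly-horizontal fling; on Wear the system may already
                // have consumed this for card navigation.
                if (Math.abs(velocityX) > Math.abs(velocityY)) {
                    // Handle the left/right swipe here.
                    return true;
                }
                return false;
            }
        });
    }

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        return detector.onTouchEvent(event) || super.onTouchEvent(event);
    }
}
```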

2) Send Data: Send data and actions between a phone and a wearable with data replication APIs and RPCs.

Send Data :
This icon doesn't give much information. Notifications are unidirectional (to the wearable), but the resulting actions already flow in the opposite direction. Data replication and actions are therefore presumably bi-directional.
Wear is not expected to be an independent device ... well, at least, not in its current incarnation. It depends on a phone/tablet for long-range network connectivity. Short-range connectivity with the phone itself is over Bluetooth. Beyond low-level Bluetooth pairing, the SDK will provide APIs for sending and receiving data, intents, and RPCs.

3) Control Sensors: Gather sensor data and display it in real-time on Android wearables.

Control Sensors :
Looks like the icon designer was reading up on Asian spiritual lore about yogis who claim to control their senses, some even their breathing and heart rate, for prolonged durations. Bonkers; the medulla oblongata takes care of the involuntary stuff. Oh but weed can help, can't it?
Here again, it is the phone/tablet that is expected to provide the data. But unlike long-range network connectivity, which is unlikely to reach the market until much later, we can expect wearable devices with several in-built sensors to arrive sooner, e.g. medical monitoring devices. For the time being, though, sensor data will be forwarded to the wearable accessory from the phone/tablet using this API.
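On the phone side, gathering the sensor data to forward is ordinary `SensorManager` usage; only the forwarding step (marked below) depends on the yet-unreleased data APIs, so it is left as a placeholder:

```java
import android.content.Context;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

public class SensorForwarder implements SensorEventListener {
    private final SensorManager sensorManager;

    public SensorForwarder(Context context) {
        sensorManager = (SensorManager) context.getSystemService(Context.SENSOR_SERVICE);
    }

    public void start() {
        Sensor accel = sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
        sensorManager.registerListener(this, accel, SensorManager.SENSOR_DELAY_UI);
    }

    public void stop() {
        sensorManager.unregisterListener(this);
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        // TODO: push event.values to the wearable once the data
        // replication APIs land; for now this is just local capture.
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
}
```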

Google, through its services framework, already provides centralized APIs for location and activity (cycling/walking/...) detection and triggers. Now they would simply forward these to the wearable device. Or would they also provide APIs in the other direction? Irrespective of the APIs, custom sensors will be added to wearables.
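That centralized activity detection is already usable on the phone today via Google Play services. A minimal sketch follows; the 30-second interval is arbitrary, and `ActivityReceiver` is a hypothetical `BroadcastReceiver` that would unpack the `ActivityRecognitionResult` from the callback intent:

```java
import android.app.PendingIntent;
import android.content.Context;
import android.content.Intent;
import android.os.Bundle;
import com.google.android.gms.common.ConnectionResult;
import com.google.android.gms.common.GooglePlayServicesClient;
import com.google.android.gms.location.ActivityRecognitionClient;

public class ActivityTracker implements
        GooglePlayServicesClient.ConnectionCallbacks,
        GooglePlayServicesClient.OnConnectionFailedListener {

    private final ActivityRecognitionClient client;
    private final PendingIntent callbackIntent;

    public ActivityTracker(Context context) {
        client = new ActivityRecognitionClient(context, this, this);
        // Hypothetical receiver for the detected-activity callbacks.
        callbackIntent = PendingIntent.getBroadcast(context, 0,
                new Intent(context, ActivityReceiver.class),
                PendingIntent.FLAG_UPDATE_CURRENT);
        client.connect();
    }

    @Override
    public void onConnected(Bundle bundle) {
        // Ask for walking/cycling/... updates every 30 seconds.
        client.requestActivityUpdates(30000, callbackIntent);
    }

    @Override
    public void onDisconnected() { }

    @Override
    public void onConnectionFailed(ConnectionResult result) { }
}
```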

4) Voice Actions: Register your app to handle voice actions, like "Ok Google, take a note."
Voice Action Icon :
Apart from the characteristic red color used for the microphone background in certain Google applications on Android, there's no clue why they chose this flat cube icon.

Unavailability of voice actions is one of the biggest bummers in the initial preview SDK.

In other cases, we can at least provide input using the hardware keyboard of the computer running the emulator. (The trick is to start typing before the fake voice recognizer times out.) But voice actions are simply disabled, with a blunt screen that informs us so. Using the adb tool, we can substitute for them to some extent.
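For example, text can be injected into whatever field currently has focus, standing in for dictation on the emulator (`%s` encodes a space for the `input text` command):

```shell
# Type into the focused text field on the emulator, in place of voice input.
adb -s emulator-5554 shell input text "take%sa%snote"

# Individual keyevents can also be sent, e.g. ENTER (keycode 66) to submit.
adb -s emulator-5554 shell input keyevent 66
```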
