At Google I/O this year, Google announced a new Activity Recognition capability for Android as part of the new Google Play services. It is realized as an API that relies on low-power sensors and a machine learning classifier to track users’ activities. We did a bit of experimentation with this API to learn how accurate it is and how it can be exploited in the field.
One very interesting aspect of this API is that it requires neither Internet connectivity nor GPS. Yep – it barely uses the battery at all.
The Activity Recognition API allows developers to build more contextual applications that can react not only to the user’s interaction with the smartphone, but also to what they are doing at any given time.
To test the API, we first created a small app based on Google’s sample resources and installed it on a phone. We then exercised the app with emphasis on some of the activities recognized by the API: driving, walking and biking.
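In essence, such an app registers a `PendingIntent` via `ActivityRecognitionClient.requestActivityUpdates()` and then receives `DetectedActivity` results, where each activity is an integer type constant plus a confidence value. As a minimal, Android-free sketch of the decoding step, here is a type-to-label mapping; the constant values mirror `com.google.android.gms.location.DetectedActivity`, and the label strings are just the ones we use in this post:

```java
// Sketch only: maps the integer activity types returned by the
// Activity Recognition API to the readable labels used in this post.
// The constant values mirror com.google.android.gms.location.DetectedActivity.
public class ActivityLabels {
    static final int IN_VEHICLE = 0;
    static final int ON_BICYCLE = 1;
    static final int ON_FOOT    = 2;
    static final int STILL      = 3;
    static final int UNKNOWN    = 4;
    static final int TILTING    = 5;

    static String label(int activityType) {
        switch (activityType) {
            case IN_VEHICLE: return "Driving";
            case ON_BICYCLE: return "Biking";
            case ON_FOOT:    return "Walking";
            case STILL:      return "Still";
            case TILTING:    return "Tilt";
            default:         return "Unknown";
        }
    }

    public static void main(String[] args) {
        System.out.println(label(IN_VEHICLE)); // Driving
        System.out.println(label(TILTING));    // Tilt
    }
}
```

On a real device the same mapping would run inside the `IntentService` that receives the recognition broadcasts.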
We just performed a basic initial test to get a feel for how well this works. The experiment was not entirely rigorous – we were not trying to do a systematic performance analysis of these capabilities. You can see what we did in the video. It is probably a bit long and too focused on specific results – thumbs up appreciated in any case 😉
For convenience, here are the correct clock times of our activities:
- 21:41 – 21:54 – Driving
- 21:56 – 22:04 – Walking
- 22:04 – 22:13 – Driving
- 22:22 – 22:28 – Biking
During the experiment we observed the following:
- recognition of driving is very accurate – some issues do occur when waiting at traffic lights or stuck in traffic – in these cases, we get a lot of “Still” statuses from the API;
- walking is by far the most accurate – we tried both holding the phone in hand and keeping it in a pocket, and the results were very stable and reliable in both cases;
- riding a bike was the least accurate with a lot of “Tilt” and “Unknown” statuses.
Here is the graph with the results (thanks Cristi for creating it!):
The API provides a confidence level with its results; indeed, it can return multiple candidate activities simultaneously, each with its own confidence level. In the graph above, when multiple activities were returned, we chose the one with the highest confidence level. The light-colored background shows the actual activity, inserted manually as a reference.
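The selection step behind the graph can be sketched as follows. The nested `DetectedActivity` class here is a hypothetical stand-in for the Google Play services class of the same name, reduced to the two fields we actually use:

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

// Sketch of the selection used for the graph: from the list of
// (activity, confidence) pairs returned for one detection, keep the
// pair with the highest confidence.
public class MostProbable {
    // Stand-in for com.google.android.gms.location.DetectedActivity.
    static class DetectedActivity {
        final String name;
        final int confidence; // 0-100, as reported by the API
        DetectedActivity(String name, int confidence) {
            this.name = name;
            this.confidence = confidence;
        }
    }

    static DetectedActivity mostProbable(List<DetectedActivity> results) {
        return results.stream()
                .max(Comparator.comparingInt(a -> a.confidence))
                .orElseThrow(() -> new IllegalArgumentException("no results"));
    }

    public static void main(String[] args) {
        // One detection while stuck at a red light: "Still" wins.
        List<DetectedActivity> oneReading = Arrays.asList(
                new DetectedActivity("Still", 77),
                new DetectedActivity("Driving", 15),
                new DetectedActivity("Unknown", 8));
        System.out.println(mostProbable(oneReading).name); // Still
    }
}
```

The real API exposes the same idea directly via `ActivityRecognitionResult.getMostProbableActivity()`.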
As you can see, the second car trip included more “Still” recognitions, mainly caused by Dublin’s traffic lights, which run on very long cycles – and we were not the lucky green-light hunters.
We think that this particular Google Play service is another step towards making the Android smartphone a more proactive device. The possibilities for app development are huge in the area of anticipating users’ needs and offering relevant services when needed. But in the end – it’s all up to developers, isn’t it?