Run or Walk (Part 1): Detecting Motion Activity with Machine Learning and Core ML

Viktor Malyi
Towards Data Science
3 min read · Aug 5, 2017


There were a lot of announcements at Apple's WWDC conference in June. One of the most remarkable was the release of the Core ML framework, which brings on-device machine learning capabilities to every developer. And this is revolutionary.

Revolutionary because, until now, anyone who wanted to use a reasonably modern machine learning model in their app had only options that were cumbersome to configure and implement. Core ML is the first option that lets you use an ML model in your application in a "drag and drop" manner: iOS does most of the heavy lifting and generates interfaces that represent the model itself, its inputs, and its outputs.
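To illustrate, here is a minimal sketch of that workflow, assuming a hypothetical model file named RunWalkClassifier.mlmodel has been dragged into an Xcode project. The generated class name and the "features"/"label" input and output names below follow from that assumption and are not from the article itself:

```swift
import CoreML

// Hypothetical: dragging RunWalkClassifier.mlmodel into Xcode makes it
// generate a RunWalkClassifier class plus typed input/output wrappers.
func predictActivity(features: MLMultiArray) -> String? {
    let model = RunWalkClassifier()
    // The generated prediction method is named after the model's inputs;
    // "features" and "label" are illustrative names only.
    guard let output = try? model.prediction(features: features) else {
        return nil
    }
    return output.label // e.g. "running" or "walking"
}
```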

This opened up an opportunity for everyone to implement an end-to-end solution: from collecting data and training a model all the way to using that model in an iOS app.

With this prospect, an idea I had been carrying around for a long time, the automated detection of sports activities, became attainable. I decided to test the hypothesis that a pre-trained machine learning model built into an iOS app can classify whether a person is running or walking based on accelerometer and gyroscope data. There are, of course, many more sports activities that could be detected automatically, but for a first attempt I picked two that should be clearly distinguishable both to a human observer and in the data received from the sensors.

[Figure: accelerometer data when the person is walking]
[Figure: accelerometer data when the person is running]
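Such traces can be captured with Core Motion's CMMotionManager. A minimal sketch follows; the 50 Hz sampling rate is my assumption for illustration, not necessarily the rate used for the plots above:

```swift
import CoreMotion

let motionManager = CMMotionManager()

func startRecording() {
    guard motionManager.isDeviceMotionAvailable else { return }
    // 50 Hz is an assumed sampling rate for illustration.
    motionManager.deviceMotionUpdateInterval = 1.0 / 50.0
    motionManager.startDeviceMotionUpdates(to: .main) { motion, error in
        guard let motion = motion, error == nil else { return }
        let a = motion.userAcceleration // gravity-free acceleration, in g
        let r = motion.rotationRate     // gyroscope reading, in rad/s
        print(a.x, a.y, a.z, r.x, r.y, r.z)
    }
}
```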

When investigating what similar tools were already available on iOS for this purpose, I found that the CMMotionActivity object delivered by the CMMotionActivityManager class can report whether the device is on a walking or running person, in an automobile, or even on a bicycle.

At first glance the problem seemed already solved, but after digging deeper into the documentation I realized that Apple's Core Motion uses data from a wide range of input sources, including the accelerometer, gyroscope, magnetometer, and pedometer.
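For reference, here is a minimal sketch of querying that built-in classifier; it shows how little code Core Motion requires, precisely because the sensor fusion happens inside the framework:

```swift
import CoreMotion

let activityManager = CMMotionActivityManager()

func observeActivity() {
    guard CMMotionActivityManager.isActivityAvailable() else { return }
    activityManager.startActivityUpdates(to: .main) { activity in
        guard let activity = activity else { return }
        if activity.walking    { print("walking") }
        if activity.running    { print("running") }
        if activity.automotive { print("in a vehicle") }
        if activity.cycling    { print("on a bicycle") }
    }
}
```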

Can one predict whether a person is walking or running using only a fraction of the available sensor data? And how accurately? Will any machine learning model be able to master this task? And, finally, will it be possible to use that model in my iOS app afterwards?

These are the main questions I will answer in this series of blog posts over the next several weeks.

Follow me to see what challenges I faced while collecting training data, designing, training, and tuning a high-accuracy machine learning model, and building it into an iOS app.

Read the next parts:
