What are we doing?
We're building a create-your-own wearable kit to bring machine learning and statistics concepts to students. We envision students asking questions about sports performance, such as "How can I improve my tennis serve?", "What percentage of my shots are accurate over time?", or "Why doesn't our basketball team ever win?" They would then use wearable sensors to collect data from the body, analyze it using the provided tools, and try to draw conclusions.
The bulk of this work involves building an app that lets the user customize data collection and analysis from on-body sensors, and designing curricula to support its use in secondary math and gym classes. This post focuses on our work on the application. To build the app, we need to enable four main functionalities:
Demo: Capturing video and sensor data
Our first goal was to make the Micro:bit's accelerometer data available in the app. We connected to the Micro:bit over Bluetooth, calculated the magnitude of acceleration at each reading, and graphed the results over time.
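For readers curious about the arithmetic, the magnitude of each reading is just the Euclidean norm of its three axes. Here's a minimal sketch in Python (not the app's own code):

```python
import math

def magnitude(x: float, y: float, z: float) -> float:
    """Euclidean magnitude of a single accelerometer reading."""
    return math.sqrt(x * x + y * y + z * z)

# A device at rest reads roughly 1 g on one axis; the second
# reading is a made-up example with motion on two axes.
readings = [(0.0, 0.0, 1.0), (3.0, 4.0, 0.0)]
magnitudes = [magnitude(x, y, z) for x, y, z in readings]
# → [1.0, 5.0]
```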
At this stage, the accelerometer output was not actually synced to the video. To address this, the app now starts collecting accelerometer data only when the user presses the video record button, and stops when the user stops the recording. The graph then presents this data continuously over the time interval of the recording.
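The gating logic can be sketched like this, a hypothetical `SyncedRecorder` class in Python rather than the app's actual implementation: readings that arrive outside a recording are dropped, and the rest are timestamped relative to the moment recording began, so sensor time and video time share an origin.

```python
class SyncedRecorder:
    """Buffers sensor samples only while a recording is active,
    timestamping each one relative to the start of the recording."""

    def __init__(self):
        self.recording = False
        self.start_time = None
        self.samples = []  # list of (seconds since record start, (x, y, z))

    def start_recording(self, now: float) -> None:
        self.recording = True
        self.start_time = now
        self.samples = []

    def stop_recording(self) -> None:
        self.recording = False

    def on_sensor_reading(self, now: float, xyz: tuple) -> None:
        # Readings outside a recording are simply dropped.
        if self.recording:
            self.samples.append((now - self.start_time, xyz))

rec = SyncedRecorder()
rec.on_sensor_reading(9.5, (0.0, 0.0, 1.0))   # before recording: dropped
rec.start_recording(10.0)
rec.on_sensor_reading(10.5, (0.0, 0.1, 1.0))  # kept, at t = 0.5 s
rec.stop_recording()
rec.on_sensor_reading(11.0, (0.0, 0.0, 1.0))  # after recording: dropped
```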
The graph also acts as a timeline: the user can tap a point on the graph and the video will jump to that timestamp. We are also now graphing x, y, and z acceleration instead of magnitude, because the per-axis data is easier for the user to interpret.
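Because the graph spans exactly the recording interval, a tap maps linearly onto the video's duration. A sketch of that mapping (a hypothetical `tap_to_timestamp` helper in Python, assuming a linear, clamped mapping):

```python
def tap_to_timestamp(tap_x: float, graph_width: float, duration: float) -> float:
    """Map a horizontal tap position on the graph to a time in the video.

    Assumes the graph spans the full recording, so position maps
    linearly onto [0, duration]; the result is clamped to that range.
    """
    fraction = min(max(tap_x / graph_width, 0.0), 1.0)
    return fraction * duration

# Tap at the midpoint of a 300 pt graph over a 60 s video:
tap_to_timestamp(150.0, 300.0, 60.0)  # → 30.0
```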
Check out what you can do with our app now:
An app like this generates a lot of data. We currently capture 10 (x, y, z) readings per second, so a five-minute video yields 3,000 points. We'll also need to store segmented gestures, their labels, and classifications. This could get out of hand quickly, so it's important for us to develop a data storage system for the application. We'll be using Apple's CoreData storage framework because we're mainly concerned with local data storage, which CoreData makes pretty easy.
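One way to picture the records involved is the hypothetical schema below, sketched as Python dataclasses; the app itself would express something like this as a CoreData model, and all the entity and field names here are our illustration, not the app's actual schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Sample:
    t: float  # seconds since the recording started
    x: float
    y: float
    z: float

@dataclass
class Gesture:
    start: float          # segment boundaries, in seconds
    end: float
    label: str            # e.g. "slapshot"
    classification: str   # e.g. "good" or "bad"

@dataclass
class Session:
    video_path: str
    samples: List[Sample] = field(default_factory=list)
    gestures: List[Gesture] = field(default_factory=list)

# 10 samples/second × 5 minutes × 60 seconds/minute:
SAMPLES_PER_FIVE_MINUTES = 10 * 5 * 60  # → 3000
```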
We’re also building out the second main functionality: letting the user segment gestures and assign labels and classifications. Users will identify the beginning and end of each gesture and assign a label (e.g., “slapshot” or “pass”). They’ll also be able to classify each gesture as “good” or “bad,” though we’ll later extend this to allow multiple classifications. This is an important next step in creating a foundation for users to explore machine learning.
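Under the hood, segmenting a gesture amounts to selecting the samples whose timestamps fall between the user-chosen start and end. A minimal sketch (Python, with hypothetical names; boundaries here are inclusive, one of several reasonable choices):

```python
def segment(samples, start: float, end: float):
    """Return the (timestamp, reading) pairs within [start, end]."""
    return [s for s in samples if start <= s[0] <= end]

# 5 seconds of fake data at 10 readings/second:
samples = [(t / 10, (0.0, 0.0, 1.0)) for t in range(50)]

slapshot = {
    "label": "slapshot",
    "classification": "good",
    "samples": segment(samples, 1.0, 2.0),
}
len(slapshot["samples"])  # → 11 (inclusive of both endpoints)
```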
Abigail Zimmermann-Niefield, Bridget Murphy and Varun Narayanswamy