CondWand

File this under “why don’t more people talk about this”: the amount of ML inference that can be done locally, even on pedestrian smartphones, is truly impressive, and it’s a capability I don’t think enough software developers are taking advantage of. It was the focus of a course I took in Spring 2023: ELEN 6908, Embedded AI. The final project was to design an embedded AI system, and I took inspiration from a project I had attempted, and failed at, years before.

I played in our concert band from 4th grade through high school, so I was pretty familiar with conducting technique. I thought it would be cool to use the accelerometer in a Wii Remote to conduct and set the tempo of music. I’m not sure how I didn’t know this had already been done in Wii Music, but oh well. My coding skills weren’t up to the task at the time, so I put the project aside. The development boards we were given for class had accelerometers, which gave me the idea to revisit the project with AI.

The first step was recording data. I strapped an Arduino Nano 33 BLE to my wrist and recorded two minutes of conducting at each tempo from 60 to 120 BPM, in 5 BPM increments.
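To give a feel for the data-gathering step, here is a minimal host-side logging sketch, assuming the board streams one `ax,ay,az` line per accelerometer sample over USB serial. The port name, baud rate, and CSV layout are my own illustrative choices, not details from the project:

```python
import csv
import time

import serial  # pyserial

PORT = "/dev/ttyACM0"   # assumed serial port; adjust for your machine
BAUD = 115200
BPM = 60                # the tempo being conducted in this session
DURATION_S = 120        # two minutes of data per tempo

with serial.Serial(PORT, BAUD, timeout=1) as ser, \
        open(f"conducting_{BPM}bpm.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["t", "ax", "ay", "az", "bpm"])
    start = time.time()
    while time.time() - start < DURATION_S:
        # Each line from the board is expected to be "ax,ay,az"
        line = ser.readline().decode(errors="ignore").strip()
        parts = line.split(",")
        if len(parts) == 3:  # skip partial or garbled lines
            writer.writerow([round(time.time() - start, 4), *parts, BPM])
```

Repeating this once per tempo yields a labeled dataset of accelerometer traces, one file per BPM value.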

[Image: Arduino held to my wrist with a hair tie. Advanced data gathering.]

We then trained an LSTM model on the recordings, since LSTMs are well suited to inference on sequential data like an accelerometer stream. With that model, we could tell what tempo someone is conducting at on the fly. We decided against having it set the speed of the music; instead, it acts as a tool to help the conductor stay on tempo. The inferred tempo is compared to a target tempo, and feedback is given on the board’s LEDs.
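For the curious, a rough sketch of what the training step could look like in Keras. The window size, overlap, and the choice to regress BPM directly are assumptions on my part, not details pulled from the actual project:

```python
import numpy as np
import tensorflow as tf

WINDOW = 100   # assumed: ~1 s of samples per window at ~100 Hz
CHANNELS = 3   # ax, ay, az

def make_windows(samples, bpm, window=WINDOW):
    """Slice an (N, 3) accelerometer recording into overlapping windows,
    each labeled with the tempo it was conducted at."""
    xs, ys = [], []
    for i in range(0, len(samples) - window, window // 2):
        xs.append(samples[i:i + window])
        ys.append(bpm)
    return np.array(xs, dtype=np.float32), np.array(ys, dtype=np.float32)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, CHANNELS)),
    tf.keras.layers.LSTM(32),                      # summarize the gesture window
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),                      # regress the tempo in BPM
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
# model.fit(x_train, y_train, epochs=50, validation_split=0.2)
```

Framing it as regression (rather than classifying the discrete 5 BPM labels) lets the model interpolate between the tempos it was trained on.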
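The inference and LED feedback run on the board itself, but the comparison logic is simple enough to sketch here in Python. The tolerance band and the color mapping are hypothetical, just to show the idea:

```python
def tempo_feedback(inferred_bpm: float, target_bpm: float, tol: float = 3.0) -> str:
    """Map the gap between inferred and target tempo to an LED color."""
    if inferred_bpm < target_bpm - tol:
        return "blue"    # conducting too slowly: speed up
    if inferred_bpm > target_bpm + tol:
        return "red"     # conducting too fast: slow down
    return "green"       # within tolerance: on tempo
```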

GitHub Project

This post is licensed under CC BY 4.0 by the author.