We started with a simple but ambitious hypothesis: we can automate developmental assessment of an infant from passive observational data (video) of infant movements. We just have to figure out the infant's body configuration in each frame of the video (pose estimation) and then what kind of movements are transpiring (activity classification). Finally, we use a clinically validated assessment tool and the activities extracted from video to decide whether the infant is developing typically or needs follow-up. Simple, right?
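To make the three stages concrete, here is a minimal sketch of that pipeline in Python. Every function here is a hypothetical stand-in (the names, the toy skeleton, and the item list are all assumptions for illustration, not our actual models or the AIMS item set); in a real system each stage would be a trained ML model.

```python
def estimate_pose(frame):
    """Stand-in for a pose-estimation model: maps a frame to joint coordinates."""
    # Hypothetical: every frame yields the same toy skeleton of (x, y) joints.
    return {"head": (0.5, 0.1), "left_wrist": (0.3, 0.4), "right_wrist": (0.7, 0.4)}

def classify_activity(pose_sequence):
    """Stand-in for an activity classifier over a window of poses."""
    # Hypothetical rule: any multi-frame window counts as "reaching".
    return "reaching" if len(pose_sequence) > 1 else "resting"

def assess(activities, required_items):
    """Score the infant as the fraction of assessment items observed on video."""
    observed = set(activities) & set(required_items)
    return len(observed) / len(required_items)

# Toy end-to-end run: two frames -> poses -> one activity -> partial score.
frames = [object(), object()]
poses = [estimate_pose(f) for f in frames]
activity = classify_activity(poses)
score = assess([activity], ["reaching", "rolling", "sitting"])
```

The point of the sketch is only the data flow: frames go in, a score against a fixed item checklist comes out, with no human assessor in the loop.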


We have completed several studies in which we gathered high-quality infant movement data and built Machine Learning (ML) engines to determine infant pose and perform motor activity classification. Our recent publication (see below) focuses on an essential problem that could thwart automated extraction of infant development metrics. A clinically validated assessment protocol such as the Alberta Infant Motor Scale (AIMS) has 58 motor items that an expert assessor looks for while evaluating a child. FIFTY-EIGHT! That's a whole lot to automate. A state-of-the-art human activity classification engine typically handles fewer than twenty classes. So, the question is: are all 58 items important? Can we use 20? 15? If yes, then which ones?


The good news is, you can make do with an abridged assessment tool that does not require all 58 items. Far fewer items can still produce valid results in assessing an infant. Fewer items mean a shorter evaluation, and, best of all, a fully automated ML system that analyzes home videos and screens for developmental delays becomes easier to build.
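As a rough illustration of what item reduction means (this is not the method from our paper, just the general idea, with toy data and made-up sizes), one could greedily keep the items whose summed score best tracks the full-scale total across a cohort:

```python
import random

random.seed(0)

# Toy cohort: 50 infants scored pass/fail on 10 items (AIMS really has 58).
N_ITEMS, N_INFANTS = 10, 50
data = [[random.randint(0, 1) for _ in range(N_ITEMS)] for _ in range(N_INFANTS)]
full_totals = [sum(row) for row in data]

def pearson(x, y):
    """Pearson correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def abridged_totals(items):
    """Each infant's total over only the selected items."""
    return [sum(row[i] for i in items) for row in data]

# Greedy forward selection: keep 4 of the 10 items, adding whichever item
# most improves agreement with the full-scale total.
selected = []
while len(selected) < 4:
    best = max((i for i in range(N_ITEMS) if i not in selected),
               key=lambda i: pearson(abridged_totals(selected + [i]), full_totals))
    selected.append(best)

r = pearson(abridged_totals(selected), full_totals)
```

A real item-reduction study validates the abridged scale against clinical outcomes, not just against the full-scale total, but the sketch shows why a well-chosen subset can stand in for the whole instrument.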


Check out our recent article in Elsevier's journal Early Human Development.


https://authors.elsevier.com/a/1gek21M28Baff-