Researchers at MIT have developed an algorithm that creates three-dimensional “moving sculptures” from 2D videos of body movements.
In addition to being a fascinating visualisation of graceful movement through space and time, the researchers believe the algorithm – called ‘MoSculp’ – could be useful as a tool to analyse the movements of athletes, dancers and other trained professionals whose skills involve physical movement.
Researchers from the Computer Science and Artificial Intelligence Laboratory (CSAIL) at Massachusetts Institute of Technology (MIT) used two-dimensional videos and photographs, the traditional choice for studying movement at present, to train MoSculp's algorithm.
Such material, the researchers note in their study, does not reveal information about the underlying three-dimensional structure of the person or object in motion.
Often, the researchers explain, it is small, subtle movements that produce the speed or precision that sports and movement-based artforms demand.
Even the most high-resolution videos fail to capture the complete picture, they said.
"Imagine you have a video of Roger Federer serving a ball in a tennis match, and a video of yourself learning tennis," Xiuming Zhang, lead author of a new paper on the system, told MIT News.
"You could then build motion sculptures of both scenarios to compare them and more comprehensively study where you need to improve."
Until now, such movements have typically been analysed using a technique called ‘stroboscopy’, which produces what looks much like a flip-book of still images stitched together.
While these snapshots can help in studying movement in general, they cannot reveal small movements such as the trajectory of one’s arm when striking a golf ball, the researchers pointed out. Such methods can also be demanding, requiring considerable setup, adjustment and equipment.
MoSculp, according to MIT News, needs only a video sequence.
When fed a 2D video, MoSculp’s algorithm detects keypoints on the subject’s body and tracks them through the most confident poses across the video to create a ‘3D skeleton’. These skeletons are then ‘stitched’ together to create what appears to be an almost smooth, continuous range of motion. The resulting sculptures can even be 3D-printed.
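The pipeline described above – detect keypoints per frame, keep only confident poses, lift each pose into 3D, then stitch the skeletons together – can be sketched in simplified form. This is not MoSculp’s actual implementation; all function names and the naive “time as depth” lifting are illustrative assumptions.

```python
# Hypothetical sketch of a MoSculp-style pipeline (names and logic are
# illustrative, not the authors' code):
#   1) assume 2D keypoints with confidences have been detected per frame,
#   2) keep only frames whose pose is confidently estimated,
#   3) lift each 2D pose to a "3D skeleton" by using time as the third axis,
#   4) stitch consecutive skeletons into one ordered sequence (the sculpture).

from typing import List, Tuple

Keypoint = Tuple[float, float, float]  # (x, y, detection confidence)


def filter_confident_poses(frames: List[List[Keypoint]],
                           min_conf: float = 0.5
                           ) -> List[Tuple[int, List[Keypoint]]]:
    """Keep (frame index, pose) pairs whose mean confidence clears min_conf."""
    kept = []
    for t, pose in enumerate(frames):
        mean_conf = sum(c for _, _, c in pose) / len(pose)
        if mean_conf >= min_conf:
            kept.append((t, pose))
    return kept


def lift_to_3d(t: int, pose: List[Keypoint],
               time_scale: float = 0.1) -> List[Tuple[float, float, float]]:
    """Naively place a 2D pose in 3D, using the frame index as the depth axis."""
    return [(x, y, t * time_scale) for x, y, _ in pose]


def build_sculpture(frames: List[List[Keypoint]],
                    min_conf: float = 0.5,
                    time_scale: float = 0.1):
    """Stitch the confident 3D skeletons into one ordered point sequence."""
    return [lift_to_3d(t, pose, time_scale)
            for t, pose in filter_confident_poses(frames, min_conf)]
```

In a real system, the 2D keypoints would come from a pose-estimation model and the 3D lifting would recover genuine depth rather than substituting time; the sketch only illustrates the filter–lift–stitch structure.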
"This work shows how to take motions and turn them into real sculptures with objective visualisations of movement, providing a way for athletes to analyse their movements for training, requiring no more equipment than a mobile camera and some computing time," Courtney Brigham, communications manager at Adobe, told MIT News.
While the algorithm currently works well only on videos of a single subject, the team is working to extend its capabilities to multi-person interactions.
MoSculp could aid in studies of social disorders, team dynamics in sports as well as interpersonal interactions, the researchers said.