Research Software Blog: Turning Movement into Data

Contributed by Scott Henwood, Director, Research Software Program

So far, our Research Software blog posts have focused on projects that fit into traditional big data areas of scientific investigation such as astronomy and particle physics. Researchers in these areas get their raw data from large, expensive instruments such as space-based telescopes and particle accelerators. More recently, data acquisition by mobile devices and the Internet of Things has made vast amounts of inexpensive data available to researchers in disciplines that have not traditionally been considered “big data”.

In this post, we look at Movement Plus Meaning (m+m), a Research Software Platform dedicated to observing and understanding human movement, created by researchers at Simon Fraser University. We’ll also explore the software technology underlying this Platform, which has uses far beyond movement analysis.

Why Understanding Human Movement is Important

You may be reading this on a mobile device such as a smartphone or tablet. If so, you can likely interact with the device through gestures made on the screen: swiping to scroll or switch pages, pinching to zoom, and so on. As consumers, we become aware of new technologies like this through the marketing that accompanies the roll-out of new products. What we don’t usually see is the fundamental research required to make these devices a reality.

The m+m Platform supports research into all kinds of human movement, including gesture. At the heart of the Platform is the motion capture studio, a space in which the movements of one or more people can be captured by strategically placed motion capture and video cameras. By making the movement data available to computers for analysis, m+m enables research into a wide variety of issues related to human movement.

The movements of athletes can be analyzed to improve performance and reduce injury. Infants and young children can be monitored as they grow to understand the normal developmental progression of human movement; this baseline can then be used to detect developmental problems in other young children as early as possible. Using human movement as input, computers can be taught to recognize and distinguish different types of human behaviour. Such an ability could be used in video security systems to automate threat detection.

How it Works

At its core, the m+m Platform contains software that connects input devices (called “sensors”) to output devices (called “effectors”). In human movement research, sensors include things like motion capture cameras or the Microsoft Kinect, while effectors include things like a video monitor for displaying a virtual environment, a holographic display of a human skeleton, or a robotic arm.
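
To make the sensor/effector idea concrete, here is a minimal sketch in Python of how such a connection might look. The names here (MotionFrame, sensor, effector, the queue-based channel) are illustrative assumptions for this post, not the actual m+m API.

```python
import queue
import threading
import time

class MotionFrame:
    """A single snapshot of motion data (illustrative structure)."""
    def __init__(self, timestamp, joints):
        self.timestamp = timestamp   # capture time in seconds
        self.joints = joints         # e.g. {"wrist": (x, y, z), ...}

def sensor(out_q, frames=5):
    """Stand-in for a motion capture camera: emits frames into a channel."""
    for i in range(frames):
        out_q.put(MotionFrame(time.time(), {"wrist": (0.1 * i, 0.0, 1.0)}))
        time.sleep(0.033)            # roughly 30 frames per second
    out_q.put(None)                  # end-of-stream marker

def effector(in_q):
    """Stand-in for an output device: consumes frames as they arrive."""
    while (frame := in_q.get()) is not None:
        print(f"render at t={frame.timestamp:.3f}: {frame.joints}")

# Connect one sensor to one effector through a shared channel.
channel = queue.Queue()
threading.Thread(target=sensor, args=(channel,)).start()
effector(channel)
```

The same pattern generalizes: swapping in a different producer or consumer changes the experiment without changing the plumbing, which is the essence of the sensor/effector design.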

m+m makes use of high-speed networking technology to support multiple inputs and outputs simultaneously, an architecture that allows people in different locations to collaborate. Suppose you were interested in understanding how people interact in virtual meetings as an alternative to expensive, time-consuming travel. m+m would allow multiple people at different locations to attend such a meeting through avatars in a virtual environment, and thanks to the networking technology, every participant would have a unique, synchronized first-person view of that environment.
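
As a rough sketch of the fan-out idea, assuming a simple publish/subscribe model (the real m+m networking layer is more sophisticated): each captured frame is timestamped once at the source and broadcast to every participant, so all views can be rendered against the same clock.

```python
import time

class MotionBroadcaster:
    """Toy publish/subscribe hub: one source, many synchronized viewers."""
    def __init__(self):
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, joints):
        # Timestamp once at the source so every participant renders
        # the frame against the same reference clock.
        stamped = {"t": time.time(), "joints": joints}
        for deliver in self.subscribers:
            deliver(stamped)

# Hypothetical participants at different sites, each with its own viewpoint.
hub = MotionBroadcaster()
hub.subscribe(lambda f: print(f"Vancouver view @ {f['t']:.3f}: {f['joints']}"))
hub.subscribe(lambda f: print(f"Toronto view   @ {f['t']:.3f}: {f['joints']}"))

hub.publish({"head": (0.0, 1.7, 0.0)})
```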

Other Applications of this Technology in Research

Although designed for human movement research, the m+m Platform’s ability to connect arbitrary inputs to arbitrary outputs makes it well suited to many other research disciplines: any research situation requiring inputs to act on outputs in real time. For example, a hand gesture sensor could be used to study new methods of controlling remotely operated vehicles, such as those used in ocean exploration. In genomics research, complex 3D data sets could be navigated and explored through real-time, collaborative gestural or whole-body movement.

In medicine, this technology can allow surgeons to practice procedures in a safe, virtual environment. And in an emergency, when real surgery is required but a surgeon cannot get to the patient, it can allow a surgeon to control a remote surgical robot, perhaps from thousands of kilometres away.

Inputs to m+m do not even have to come directly from physical devices. A system that enables research into aircraft accidents could feed sensor data collected from the black boxes of an actual aircraft into a flight simulator to study pilot reaction.
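
As a sketch of that idea, recorded data can stand in for a live sensor by replaying samples with their original timing. The record format below is an assumption for illustration, not an actual flight-recorder format.

```python
import time

# Hypothetical recorded samples: (seconds since start, sensor reading).
recording = [
    (0.00, {"altitude": 10000, "pitch": 2.0}),
    (0.50, {"altitude": 9990, "pitch": 1.5}),
    (1.00, {"altitude": 9975, "pitch": 0.8}),
]

def replay(samples, deliver):
    """Feed recorded samples downstream, preserving their original timing."""
    start = time.monotonic()
    for offset, reading in samples:
        # Sleep until this sample's original offset has elapsed.
        time.sleep(max(0.0, offset - (time.monotonic() - start)))
        deliver(reading)

# The simulator here is a stand-in for any effector expecting live input.
replay(recording, lambda r: print("simulator receives:", r))
```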

Into the Future

The possibilities are limitless.

The m+m team has already entered into a collaboration with one of CANARIE’s other funded research projects, the VESTA video analysis platform from CRIM, to add video analysis to captured motion data. This will eliminate the need for subjects in motion capture rooms to wear special markers to assist in motion tracking. The team has also begun a joint investigation with another CANARIE-funded project, REALM from Western University, to evaluate and prototype connecting a robot arm to m+m. Imagine using a Kinect camera to control a robot in real time for disaster response or for work in environmentally hazardous locations.
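
Very roughly, one way such a Kinect-to-robot pipeline might look: map a tracked joint position into the robot’s workspace and send it as a target. All names and the coordinate mapping here are assumptions for illustration, not the REALM or m+m interfaces.

```python
def hand_to_robot_target(hand_xyz, scale=0.5, origin=(0.3, 0.0, 0.2)):
    """Map a tracked hand position (metres, camera frame) into a target
    position in the robot arm's workspace (illustrative linear mapping)."""
    x, y, z = hand_xyz
    ox, oy, oz = origin
    return (ox + scale * x, oy + scale * y, oz + scale * z)

def send_to_arm(target):
    """Stand-in for a real robot command interface."""
    print("move arm to %.2f, %.2f, %.2f" % target)

# Hypothetical stream of hand positions from a Kinect-style sensor.
for hand in [(0.0, 0.1, 0.8), (0.1, 0.1, 0.7), (0.2, 0.0, 0.6)]:
    send_to_arm(hand_to_robot_target(hand))
```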

To allow others to make use of m+m in their own research, the team at Simon Fraser University has released installation packages for both Windows and Mac hosts, and has made the source code available to everyone under an open source license. They are currently developing cloud-based infrastructure to further enhance m+m for distributed use.

It’s world-class science, done right here in Canada!

For More Information