(tpm)
Video, video everywhere, and not enough people to watch it. That’s the conundrum facing military and security personnel today, the people who sit in front of banks of monitors, watching hours of mind-numbingly mundane footage of people going about their business, yet must stay alert to the slightest sign of a wanted suspect or a crime in progress.

But now, researchers at MIT and the University of Minnesota have created a new program that can discern such signals from the video noise faster and more accurately than either a human or an existing automated system.

It’s a new type of smart surveillance system that “learns” from previously recorded video footage how to quickly scan real-time feeds and identify specific suspects. It can also flag unusual, potentially dangerous changes in an environment like an airport, such as when someone deliberately leaves behind a bag.

“The learning phase is very fast, not requiring more than a minute for the problems we explored,” wrote Christopher Amato, the leader of the effort and a postdoctoral candidate with MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL).

The new system developed by Amato and his colleagues is a software program called “Biologically Inspired Scene Estimation” (BIS-E). It builds upon current surveillance systems that require users to choose among various vision algorithms to apply to a given scene or scenario.

But BIS-E takes this idea a step further, relying instead on higher-level algorithms that can quickly analyze a real-time video feed, compare it to the filters that were applied to previous similar footage, and select the best option from the available set, all without human direction.
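The selection step described above can be thought of as a nearest-neighbor lookup: the live feed is reduced to a coarse scene description, compared against descriptions of previously analyzed footage, and the vision algorithm that worked best on the most similar past scene is chosen. The following is a minimal sketch of that idea; the feature names, numbers, and algorithm labels are illustrative assumptions, not details of the actual BIS-E software.

```python
import math

# Hypothetical scene "signatures": coarse feature vectors (here: brightness,
# motion level, crowd density) extracted from previously recorded footage,
# each paired with the vision algorithm that performed best on that footage.
# All values and labels are invented for illustration.
TRAINING_LIBRARY = [
    ((0.9, 0.1, 0.2), "face_recognition"),   # bright, static, sparse scene
    ((0.4, 0.8, 0.9), "crowd_tracking"),     # dim, busy, dense scene
    ((0.7, 0.6, 0.1), "abandoned_object"),   # bright lobby, moderate motion
]

def select_algorithm(frame_features, library=TRAINING_LIBRARY):
    """Pick the vision algorithm whose training scene most resembles the
    live frame, by nearest-neighbor distance in feature space."""
    best_algo, best_dist = None, float("inf")
    for signature, algo in library:
        d = math.dist(frame_features, signature)  # Euclidean distance
        if d < best_dist:
            best_algo, best_dist = algo, d
    return best_algo

# A live frame resembling a bright, static, sparse scene:
print(select_algorithm((0.85, 0.15, 0.25)))  # face_recognition
```

In a real system the "signature" would come from learned image features rather than three hand-picked numbers, but the control flow — match the current scene to past scenes, then reuse the filter that worked there — is the same.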

In practical terms, this means the software can automatically identify and follow specific suspects using facial recognition technology, or sound an alarm when a scene changes in a worrying way, such as a bag being left behind deliberately in an airport lobby, or people or objects moving in ways the system deems “unusual”...