Tech Trends: Sensor Fusion Pushes AI Forward

Combining data from multiple sensors can ease the strict reliance on video surveillance when creating deep learning algorithms



At October’s annual CONSULT conference in San Antonio, I had the opportunity to moderate a panel discussing the current state of artificial intelligence in the security industry. The panel included Quang Trinh from Axis Communications, Aaron Saks from Hanwha Techwin, and Srinath Kalluri from Oyla, and we discussed the many challenges in moving AI forward – particularly the area of AI known as deep learning.



While I have defined deep learning in previous articles, let's summarize the concept for this column's purposes as a form of AI where computers are taught to mimic the human thought process through multiple layers of algorithms known as "neural networks." Within the security industry, the holy grail of deep learning would be the ability for computers to interpret video feeds to identify behaviors that constitute a security threat. Video analytics have that same mission, but they are focused on finding discrete elements based on very narrowly defined rules.



Deep learning algorithms would have the ability to interpret scenes based on a much broader set of rules – many of which the computer has defined itself through its own ability to learn from data.
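To make "multiple layers of algorithms" a little more concrete, here is a minimal sketch in Python using PyTorch. The layer sizes, the 128-value feature vector, and the TinyThreatClassifier name are illustrative assumptions only, not any vendor's actual model.

```python
# A minimal sketch of a layered ("deep") neural network that maps a feature
# vector extracted from a video clip to a threat / no-threat score.
# All sizes and names here are illustrative assumptions.
import torch
import torch.nn as nn

class TinyThreatClassifier(nn.Module):
    def __init__(self, num_features: int = 128):
        super().__init__()
        # Several layers stacked one after another -- the "deep" in deep learning.
        self.layers = nn.Sequential(
            nn.Linear(num_features, 64),
            nn.ReLU(),
            nn.Linear(64, 32),
            nn.ReLU(),
            nn.Linear(32, 1),  # single output: a threat score
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.layers(x))

model = TinyThreatClassifier()
clip_features = torch.randn(1, 128)   # stand-in for features from one clip
print(model(clip_features))           # near 0 = benign, near 1 = threat
```

What makes such a model "learn" is training on labeled examples, which is exactly where the data problem described below comes in.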



Still a Ways to Go

While some sales and marketing folks would have potential customers believe that we are near the end-state of deep learning, in reality we are far from it. Deep learning still faces a multitude of issues.



For one, for computers to train themselves to recognize security incidents, they need massive amounts of annotated training data showing what constitutes an actual threat. While teaching a computer to find a red balloon is easily achievable due to the massive number of free images available on the internet, video clips of security incidents are much more difficult to obtain. In addition, these clips would need to be annotated (e.g., this one is a video of a fight, this one is a video of vandalism, this one is innocuous, etc.). Vast, security-specific datasets are rare or nonexistent, and certainly not available to solution developers in a publicly accessible internet location. This problem is compounded by privacy laws, which in many cases would prohibit the creation of these libraries in the first place based on data retention limits.
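For readers unfamiliar with the term, "annotated" simply means each clip carries a human-assigned label. The sketch below shows what such a library might look like; the file names and labels are hypothetical examples, not a real security dataset.

```python
# Illustrative sketch of annotated training data for a security model.
# File names and labels are hypothetical; as noted above, real libraries
# like this are rare or nonexistent.
from collections import Counter

annotated_clips = [
    {"file": "clip_0001.mp4", "label": "fight"},
    {"file": "clip_0002.mp4", "label": "vandalism"},
    {"file": "clip_0003.mp4", "label": "innocuous"},
]

# Deep learning pipelines typically need thousands of examples per label;
# counting examples per class shows how thin a hand-built library would be.
print(Counter(clip["label"] for clip in annotated_clips))
```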



Sensor Fusion Advances AI

One way the industry can make beneficial use of AI technology today is by combining data from multiple sensors. The concept of sensor fusion involves using multiple sensor types to create a more robust picture of reality that can help detect threats while also providing data used to train AI learning engines.
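A simple way to picture the idea is "late" fusion, where an alert is raised only when two independent sensors agree. The sketch below assumes a camera-based analytic and a depth-style sensor such as LiDAR or radar; the function and parameter names are hypothetical.

```python
# Minimal sketch of late sensor fusion: corroborate a camera detection with a
# second, independent sensor before alerting. Names and threshold are assumptions.
def fused_alert(camera_confidence: float, depth_sensor_detected: bool,
                threshold: float = 0.8) -> bool:
    """Raise an alert only when both sensors support it."""
    return camera_confidence >= threshold and depth_sensor_detected

# A reflection or shadow may fool the camera alone (confidence 0.9), but if the
# depth sensor sees nothing physically present, no alarm is raised.
print(fused_alert(0.9, False))   # False -> suppressed false alarm
print(fused_alert(0.9, True))    # True  -> corroborated detection
```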



One company focused on sensor fusion is Oyla, which combines video data with LiDAR to create a three-dimensional picture of a scene, as opposed to a traditional two-dimensional video feed. I spoke with Srinath Kalluri, Founder and CEO of Oyla, about how sensor fusion will help advance AI technology.
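To illustrate what a fused video-plus-LiDAR view involves (this is a generic sketch, not Oyla's implementation), LiDAR points can be projected into the camera image so each pixel carries both color and depth. The intrinsic matrix K and the sample points below are assumed values.

```python
# Illustrative sketch: project LiDAR points (in the camera frame) into the
# image plane using an assumed pinhole camera matrix K, yielding pixel
# coordinates plus depth -- a simple 3D picture of the scene.
import numpy as np

K = np.array([[800.0,   0.0, 320.0],   # assumed focal lengths / principal point
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def project_points(points_xyz: np.ndarray) -> np.ndarray:
    """Map Nx3 LiDAR points to Nx3 rows of (u, v, depth)."""
    uvw = (K @ points_xyz.T).T                  # pinhole projection
    uv = uvw[:, :2] / uvw[:, 2:3]               # normalize by depth
    return np.hstack([uv, points_xyz[:, 2:3]])  # keep depth with pixel coords

lidar_points = np.array([[0.5, 0.1, 4.0],       # ~4 m in front of the camera
                         [-1.0, 0.2, 8.0]])
print(project_points(lidar_points))
```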



“Neural network-based deep learning models, when combined with sensor fusion, ‘learn’ the environment and get better with use (data),” Kalluri says. “This enables the user to train the AI to recognize and eliminate false alarms. AI models can also be used to classify the nature of threats, further improving the accuracy of threat assessment.”



Sensor fusion also helps us move beyond a strict reliance on video surveillance as the only data source.
