CVPR, a premier annual computer vision conference, kicks off this coming Monday, June 23, 2014. As a leader in computer vision technology and research, Kitware will be actively participating in the main conference as well as several co-located workshops and short courses, including:
* Bronze sponsor of CVPR 2014
* Invited talk on Scene Segmentation
* Demos on Complex Activity Recognition Algorithms and Functional Scene Element Recognition algorithms
* Emerging Topics in Human Activity Recognition tutorial
* Complex Activity Recognition poster
* Functional Scene Element Recognition paper
Feel free to reach out! Many Kitware team members will be in attendance. If you would like to set up a time to meet with us to discuss potential collaboration or how Kitware can help you meet your current computer vision research challenges, please contact us at (518) 371-3971 or computervision@kitware.com.
CVPR 2014 Kitware Participation: Invited Talks, Demos, Tutorials, Posters, and Papers!
Invited Talk by Dr. Anthony Hoogs on ‘Video Scene Segmentation and Recognition by Location-Independent Activity Classes’
As an industry leader, Dr. Anthony Hoogs, Kitware’s Senior Director of Computer Vision, was invited to speak at the Workshop on Perceptual Organization in Computer Vision. This long-standing workshop on segmentation emphasizes research that is driving innovation in the computer vision field. Dr. Hoogs’ presentation will cover the recent ICCV 2013 paper by Eran Swears, Anthony Hoogs, and Kim Boyer, as well as earlier ECCV 2010 work by Matt Turek, Sangmin Oh, Roddy Collins, and Anthony Hoogs.
Demonstrations on ‘Complex Activity Recognition Algorithms’ and ‘Functional Scene Element Recognition Algorithms’
Dr. Hoogs and Eran Swears, a Computer Vision R&D Engineer at Kitware, will provide two demonstrations on ‘Complex Activity Recognition Algorithms’ and ‘Functional Scene Element Recognition Algorithms’. These demonstrations will highlight Kitware’s newest technologies, in particular its functional scene element recognition and a prototype system that enables users to construct a model of a complex pattern of events with spatio-temporal relations. The system automatically detects instances of the model in surveillance video despite missing and erroneous tracks and events and spatio-temporal variability. It includes a user-friendly graphical user interface (GUI) that enables users to examine the detections interactively.
Be on the lookout for additional demonstrations throughout the event that showcase other Kitware technologies, including, but not limited to, our wide area motion imagery (WAMI) and full motion video (FMV) capabilities!
Half-Day Tutorial on ‘Emerging Topics in Human Activity Recognition’
Sangmin Oh, a Computer Vision R&D Engineer at Kitware, will teach a half-day tutorial on ‘Emerging Topics in Human Activity Recognition’ in collaboration with Michael Ryoo (NASA Jet Propulsion Laboratory), Ivan Laptev (INRIA), and Greg Mori (Simon Fraser University). He will present and discuss the current state of computer vision research in video understanding across multiple video types.
Poster on ‘Collaborative Computer Vision R&D at Kitware’
As part of the Vision Industry and Entrepreneur Workshop (VIEW), Kitware will present a poster highlighting the company’s computer vision capabilities, with an accompanying demonstration of its wide area motion imagery (WAMI) and full motion video (FMV) technologies.
Paper by Eran Swears on ‘Complex Activity Recognition using Granger Constrained DBN (GCDBN) in Sports and Surveillance Video’
The CVPR 2014 poster session was very competitive, with an acceptance rate of only 28%, representing some of the best research in the field. Eran Swears, a Computer Vision R&D Engineer at Kitware, will present a poster titled ‘Complex Activity Recognition using Granger Constrained DBN (GCDBN) in Sports and Surveillance Video.’ Eran’s poster is based on a collaborative paper between Kitware and Rensselaer Polytechnic Institute’s Professor Kim Boyer.
Paper Presented by Eran Swears on ‘Pyramid Coding for Functional Scene Element Recognition in Video Scenes’
Eran will present his ICCV 2013 paper as part of the Scene Understanding Workshop (SUNw) on June 23rd, highlighting Kitware’s research on ‘Pyramid Coding for Functional Scene Element Recognition in Video Scenes’.
ABSTRACTS
‘Complex Activity Recognition using Granger Constrained DBN (GCDBN) in Sports and Surveillance Video’
Modeling interactions of multiple co-occurring objects in a complex activity is becoming increasingly popular in the video domain. The Dynamic Bayesian Network (DBN) has been applied to this problem in the past due to its natural ability to statistically capture complex temporal dependencies. However, standard DBN structure learning algorithms are generative, require manual structure definitions, and/or are computationally complex or restrictive. We propose a novel structure learning solution that fuses the Granger Causality statistic, a direct measure of temporal dependence, with the Adaboost feature selection algorithm to automatically constrain the temporal links of a DBN in a discriminative manner. This approach enables us to completely define the DBN structure prior to parameter learning, which reduces computational complexity in addition to providing a more descriptive structure. We refer to this modeling approach as the Granger Constrained DBN (GCDBN). Our experiments show how the GCDBN outperforms two of the most relevant state-of-the-art graphical models in complex activity classification on handball video data, surveillance data, and synthetic data.
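To make the key statistic concrete, here is a minimal, self-contained Python sketch of the pairwise Granger causality test the abstract describes as a direct measure of temporal dependence: it compares an autoregressive model of one signal against a model augmented with another signal’s past, and a large F-statistic suggests a temporal link worth keeping. The function name, lag order, and toy data are our own illustrative choices, and the actual GCDBN pipeline additionally fuses such scores with Adaboost feature selection to pick links discriminatively, which this sketch omits.

```python
import numpy as np

def granger_f_stat(x, y, lag=2):
    """F-statistic for 'x Granger-causes y' at a fixed lag order.

    Compares a restricted AR model (y from its own past) against an
    unrestricted model (y from its own past plus x's past).
    """
    n = len(y)
    # Lagged design matrices for t = lag .. n-1.
    Y = y[lag:]
    own = np.column_stack([y[lag - k: n - k] for k in range(1, lag + 1)])
    cross = np.column_stack([x[lag - k: n - k] for k in range(1, lag + 1)])
    ones = np.ones((n - lag, 1))

    def rss(design):
        beta, *_ = np.linalg.lstsq(design, Y, rcond=None)
        resid = Y - design @ beta
        return resid @ resid

    rss_r = rss(np.hstack([ones, own]))         # restricted model
    rss_u = rss(np.hstack([ones, own, cross]))  # unrestricted model
    dof = (n - lag) - (2 * lag + 1)             # residual degrees of freedom
    return ((rss_r - rss_u) / lag) / (rss_u / dof)

# Toy example: x leads y by one step, so x should "Granger-cause" y.
rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = np.roll(x, 1) + 0.1 * rng.normal(size=500)
print(granger_f_stat(x, y), granger_f_stat(y, x))  # first score is far larger
```

In a structure-learning setting, scores like these would be computed for every pair of object or event channels, and only high-scoring directed links would be kept as candidate temporal edges of the DBN.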
‘Pyramid Coding for Functional Scene Element Recognition in Video Scenes’
Recognizing functional scene elements in video scenes based on the behaviors of moving objects that interact with them is an emerging problem of interest. Existing approaches have a limited ability to characterize elements such as crosswalks, intersections, and buildings that have low activity, are multi-modal, or have indirect evidence. Our approach recognizes the low-activity and multi-modal elements (crosswalks/intersections) by introducing a hierarchy of descriptive clusters to form a pyramid of codebooks that is sparse in the number of clusters and dense in content. The incorporation of local behavioral context such as person-enter-building and vehicle-parking nearby enables the detection of elements that do not have direct motion-based evidence, e.g., buildings. These two contributions significantly improve scene element recognition when compared against three state-of-the-art approaches. Results are shown on typical ground-level surveillance video and, for the first time, on the more complex wide area motion imagery (WAMI).
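As a rough illustration of the codebook-pyramid idea, the sketch below clusters motion descriptors at several granularities and encodes each descriptor by its codeword at every level, so coarse levels capture dominant behavior modes while fine levels retain rarer, low-activity patterns. All names and parameters here are hypothetical, and plain k-means stands in for the paper’s descriptive clustering; this shows only the multi-resolution encoding concept, not the published method.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_codebook_pyramid(descriptors, levels=(4, 16, 64)):
    """Cluster motion descriptors at several granularities.

    Each level is a k-means codebook; coarse levels capture broad behavior
    modes, fine levels capture rarer, low-activity patterns.
    """
    return [KMeans(n_clusters=k, n_init=10, random_state=0).fit(descriptors)
            for k in levels]

def encode(descriptor, pyramid):
    """Concatenate one-hot codeword assignments across all pyramid levels."""
    code = []
    for km in pyramid:
        onehot = np.zeros(km.n_clusters)
        onehot[km.predict(descriptor[None, :])[0]] = 1.0
        code.append(onehot)
    return np.concatenate(code)

# Toy usage: 500 random 8-D track descriptors.
rng = np.random.default_rng(0)
D = rng.normal(size=(500, 8))
pyramid = build_codebook_pyramid(D)
print(encode(D[0], pyramid).shape)  # (4 + 16 + 64,) = (84,)
```

The resulting multi-level codes could then feed a standard classifier per scene element type, with the pyramid providing both broad and fine-grained behavioral evidence.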