Anthony Hoogs is co-organizing the Workshop on the Activity Recognition Competition at CVPR 2011. Held on Monday, June 20, this workshop brings together leading researchers in video activity analysis to discuss current challenges in the field. The workshop will present the VIRAT Video Dataset for action recognition and report initial results obtained with it.
Luis Ibáñez, Matt Leotta, Amitha Perera and Patrick Reynolds will be teaching the tutorial ‘ITK meets OpenCV: A New Open Source Software Resource for CV’ on June 20th.
On Wednesday, June 22, Kitware will also present the paper ‘A Large-scale Benchmark Dataset for Event Recognition in Surveillance Video,’ authored by Sangmin Oh, Anthony Hoogs, Amitha Perera, Naresh Cuntoor, Chia-Chih Chen, Jong Taek Lee, Saurajit Mukherjee, J.K. Aggarwal, Hyungtae Lee, Larry Davis, Eran Swears, Xiaoyang Wang, Qiang Ji, Kishore Reddy, Mubarak Shah, Carl Vondrick, Hamed Pirsiavash, Deva Ramanan, Jenny Yuen, Antonio Torralba, Bi Song, Anesco Fong, Amit Roy-Chowdhury, and Mita Desai.
We introduce a new large-scale video dataset designed to assess the performance of diverse visual event recognition algorithms, with a focus on continuous visual event recognition (CVER) in outdoor areas with wide coverage. Previous datasets for action recognition are unrealistic for real-world surveillance because they consist of short clips showing one action by one individual. Datasets have been developed for movies and sports, but these actions and scene conditions do not transfer effectively to surveillance video. Our dataset consists of many outdoor scenes with actions performed naturally by non-actors in continuously captured videos of the real world. The dataset includes large numbers of instances of 23 event types distributed throughout 29 hours of video. The data is accompanied by detailed annotations, including both moving object tracks and event examples, which will provide a solid basis for large-scale evaluation. Additionally, we propose several evaluation modes for visual recognition tasks and evaluation metrics, along with our preliminary experimental results. We believe this dataset will stimulate diverse aspects of computer vision research and help advance CVER in the years ahead.