WACV 2021: Winter Conference on Applications of Computer Vision

Kitware is proud to be a Silver sponsor of the 2021 Winter Conference on Applications of Computer Vision (WACV). This conference is being held virtually from Tuesday, January 5 through Saturday, January 9.

WACV is the premier meeting on applications of computer vision of the Institute of Electrical and Electronics Engineers (IEEE) and its Technical Committee on Pattern Analysis and Machine Intelligence (PAMI-TC). The conference promotes collaboration, research and development, and insight into computer vision applications and technology through workshops, tutorials, and main sessions, and provides in-depth information on cutting-edge research advances in applications of computer vision technology. WACV attendees include representatives from academia, industry, and government.

We are happy to be supporting this community as we continue to collaborate and engage through virtual channels. We also hope to recruit talented computer vision professionals to join our team here at Kitware.

This year, we are presenting a paper at the main conference and co-organizing a workshop.


Workshop: Third International Workshop on Human Activity Detection in Multi-camera, Continuous, Long-duration Video

Tuesday, January 5, 2021, from 11 am – 3:15 pm ET (Virtual)

When humans interact with machines and other agents, advanced pattern recognition techniques are predominantly used to interpret their complex behavioral patterns. Computer vision is key to the analysis and synthesis of human behavior, but it stands to gain much from multimodal and multi-source processing to improve accuracy, resource use, robustness, and contextualization.

This workshop is for researchers looking to model human behavior under its multiple facets, with particular attention to multi-source aspects, including multi-sensor, multi-participant, and multimodal settings. The diversity of human behavior, the richness of multimodal data that arises from its analysis, and the multitude of applications that demand rapid progress in this area make this workshop a timely and relevant discussion and dissemination platform.

This workshop is organized by Afzal Godil, Jonathan Fiscus, Yooyoung Lee, Anthony Hoogs, and Reuven Meth.


Paper Presentation: MEVA: A Large-Scale Multiview, Multimodal Video Dataset for Activity Detection

Wednesday, January 6, 2021, in the 9:45 – 11 pm ET time slot (Virtual)

Authors

Kellie Corona, Katie Osterdahl, Roddy Collins, Anthony Hoogs

Abstract

We present MEVA, a new and very-large-scale dataset for human activity recognition. Existing surveillance datasets either focus on activity counts by aggregating public video disseminated due to its content, which typically excludes same-scene background video, or they achieve persistence by observing public areas and thus cannot control for activity content. Our dataset is over 9300 hours of untrimmed, continuous video, scripted to include diverse, simultaneous activities, along with spontaneous background activity. We have annotated 144 hours for 37 activity types, marking bounding boxes of actors and props.

Our collection observed approximately 100 actors performing scripted scenarios and spontaneous background activity over a three-week period at an access-controlled venue, collecting in multiple modalities with overlapping and non-overlapping indoor and outdoor viewpoints. The resulting data includes video from 38 RGB and thermal IR cameras, 42 hours of UAV footage, as well as GPS locations for the actors. 122 hours of annotation are sequestered in support of the NIST Activity in Extended Video (ActEV) challenge; the other 22 hours are available on our website, along with 328 hours of ground camera data, 4.6 hours of UAV data, and 9.6 hours of GPS logs.

Additional derived data includes camera models geo-registering the outdoor cameras and a dense 3D point cloud model of the outdoor scene. The data was collected with IRB oversight and approval and released under a CC-BY-4.0 license.
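To make the shape of such annotations more concrete, below is a minimal Python sketch of how one might tally annotated activity instances from MEVA-style records. The record layout, field names, and activity labels here are illustrative assumptions, not the dataset's actual schema; consult the MEVA website and the NIST ActEV documentation for the real annotation formats.

```python
# Illustrative sketch only: the ActivityAnnotation layout below is a hypothetical,
# simplified stand-in for MEVA's actual annotation format. It captures the kinds of
# information described above (activity type, temporal extent, actor bounding boxes).
from collections import Counter
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class ActivityAnnotation:
    activity_type: str                       # one of the annotated activity types
    start_frame: int                         # first frame of the activity instance
    end_frame: int                           # last frame of the activity instance
    actor_boxes: List[Tuple[int, int, int, int]] = field(default_factory=list)
    # each box is (x0, y0, x1, y1) in pixel coordinates for an actor or prop


def count_by_type(annotations: List[ActivityAnnotation]) -> Counter:
    """Count annotated activity instances per activity type."""
    return Counter(a.activity_type for a in annotations)


if __name__ == "__main__":
    # Small hand-made example records; labels are examples, not an official list.
    sample = [
        ActivityAnnotation("person_opens_facility_door", 120, 180, [(410, 220, 470, 400)]),
        ActivityAnnotation("vehicle_turns_left", 300, 420),
        ActivityAnnotation("person_opens_facility_door", 900, 950),
    ]
    print(count_by_type(sample))
    # Counter({'person_opens_facility_door': 2, 'vehicle_turns_left': 1})
```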
