Kitware will continue its active participation in and support of the computer vision community at this year’s IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). The computer vision team has a long-term commitment to CVPR, and this year is no different. In talks at multiple workshops, we will discuss our extensive experience using deep learning and computer vision to develop tools for image and video scene recognition, object detection and tracking, and image enhancement. Domains include aerial video, ground-level video, underwater video, and satellite imagery. Throughout CVPR 2019, the vision team will continue its high level of participation as a silver sponsor, area chair, reviewers, workshop organizers, speakers, and more. Come by our booth (#1118) to discuss Kitware’s computer vision and deep learning techniques supporting the open source software community. Be sure to check out the full details below so you don’t miss this great opportunity to meet with our team.
Kitware is a silver sponsor of this year’s CVPR. Each year, Kitware sponsors the conference to support the community, recruit computer vision professionals, and continue collaboration and engagement. Anthony Hoogs, Ph.D., Senior Director of Computer Vision, is an area chair, lending his extensive expertise in computer vision and deep learning to help select the highest quality papers.
Kitware’s participation kicks off on Sunday, June 16, with the CVPR-19 Workshop on Explainable Artificial Intelligence (AI). This workshop “aims to bring together researchers, engineers and industrial practitioners” to discuss topics and concerns related to the “interpretability, safety, and reliability of AI.” Dr. Anthony Hoogs will present our accepted paper, “Explainability for Content-Based Image Retrieval,” which draws on our research and development (R&D) experience on funded research programs focused on explainable AI. The half-day event is located at the Hyatt, Beacon, A; however, please check the agenda at the start of the conference to confirm the location has not changed.
On Monday, June 17, the CVPR 2019 Workshop and Challenge on Automated Analysis of Marine Video for Environmental Monitoring begins, with Dr. Anthony Hoogs serving as General Co-Chair and organizer. This full-day workshop brings together “experts and researchers interested in learning about the challenges, current work, and opportunities in marine video analytics, including video captured from both under and above water.” Monitoring fish, shellfish, marine mammals, coral reefs, bottom habitats, and other wildlife continues to be of great interest to oceanographers, biologists, and others. Maritime video collection is expanding rapidly as professionals work to find new ways to use this data for marine science and oceanography. The challenges in this environment are vast, and many experts and researchers are working diligently to overcome them and enable increased automated video processing. Currently, marine video is annotated manually, creating a tremendous bottleneck for studying marine biological problems such as fish population estimation. Under NOAA funding, Kitware has developed an open source framework and toolkit, Video and Image Analytics for Marine Environments (VIAME), to integrate analytics algorithms from the community into an operational capability for NOAA.
This workshop will feature a data challenge in this domain, in addition to invited talks and selected papers. A large amount of image data from a variety of underwater cameras and environments has been provided and annotated by the National Oceanic and Atmospheric Administration (NOAA). Challenge entries compete to achieve the highest detection and classification accuracy for organisms in the provided videos. Results will be scored and ranked by Kitware, and the top three entries will present their solutions at the workshop. Check out the challenge website for more information.
Kitware will also present and participate in the EARTHVISION 2019 IEEE/ISPRS Workshop on June 17. Automated interpretation of remotely sensed data is an evolving and highly challenging problem with broad benefits for society, the economy, industry, and the planet, and raising awareness within the computer vision community is essential to developing better tools for data mining and for multi-sensor, multi-modality, multi-temporal, and multi-resolution fusion. The workshop focuses on large-scale computer vision for remote sensing imagery and is a full-day event with invited talks, poster spotlights, and presentations. This year, Kitware’s paper “Urban Semantic 3D Reconstruction from Multiview Satellite Imagery” received the workshop’s Best Paper Presentation award. Matt Leotta, Ph.D., a technical lead on Kitware’s computer vision team, will present this research, joined by additional Kitware authors and attendees, including Chengjiang Long, Ph.D., a computer vision researcher at Kitware.
The Workshop on Media Forensics will also be held on Monday, June 17. Topics will include media forensics, reconstruction of media genealogy, and analysis and detection of imagery and videos created by new synthesis methods such as Generative Adversarial Networks (GANs). Kitware’s Dr. Chengjiang Long will give a spotlight presentation starting at 9:30 am PDT on the paper “A Coarse-to-fine Deep Convolutional Neural Network Framework for Frame Duplication Detection and Localization in Forged Videos,” which proposes a novel coarse-to-fine framework based on deep convolutional neural networks to automatically detect and localize frame duplication. In addition, keynote speakers from universities and government organizations will discuss current research, challenges, and real-world needs in media forensics. Make sure to check out this workshop in room 102A, along with the others at CVPR 2019!
Finally, during the main conference and exhibition, Kitware’s computer vision team members, including Anthony Hoogs, Ph.D., Keith Fieldhouse, Matt Leotta, Ph.D., Brian Clipp, Ph.D., Matt Dawkins, Chengjiang Long, Ph.D., Heather James, and John Westbrook, will be available at booth #1118 to discuss and demonstrate Kitware’s computer vision expertise. You will have the opportunity to see the Kitware Image and Video Exploitation and Retrieval (KWIVER) toolkit, Kitware’s open source software toolkit containing advanced computer vision tools for object detection and tracking; activity, event, and threat detection; scene understanding; and social multimedia analysis. We will also demonstrate our work on the Video and Image Analytics for Marine Environments (VIAME) toolkit, developed with NOAA as open source software for fish monitoring and now being extended to other capabilities and needs.
About Kitware’s Computer Vision Team:
Kitware’s Computer Vision team recognizes how valuable advancing computer vision and deep learning is to pushing capabilities beyond their current limits in support of academia, industry, and the DoD and intelligence communities. Our main focus areas include deep learning; object detection and tracking; image and video scene understanding; image and video forensics; social multimedia analysis; complex activity, event, and threat detection; 3D vision; and super resolution. We are not limited to these areas, however, and continuously explore other research and development for our customers and partners. We partner with many academic institutions, such as Harvard, the Massachusetts Institute of Technology, Cornell University, the University of California, Berkeley, and Texas A&M University. We have worked with various agencies, such as the Defense Advanced Research Projects Agency (DARPA), the Air Force Research Laboratory (AFRL), the Office of Naval Research (ONR), the Intelligence Advanced Research Projects Activity (IARPA), and the U.S. Air Force. Kitware has developed and deployed an operational Wide Area Motion Imagery (WAMI) tracking system for Intelligence, Surveillance, and Reconnaissance (ISR) in theater, providing analysts with exploitation capabilities that fuse sensors, platforms, and people. Our work with DARPA on Squad X has led to extensive research, development, and deployment of robust methods to more accurately identify and track objects and people, delivered straight to the soldier on the ground.
Our work on the Visual Global Intelligence and Analytics Toolkit (VIGILANT), funded by the Air Force Research Laboratory (AFRL) through the Small Business Innovation Research (SBIR) program, is a collaborative effort to develop object detection, object-based change detection, and unstructured change detection in satellite imagery. Our partners include Rochester Institute of Technology’s (RIT’s) Digital Imaging and Remote Sensing (DIRS) and Real Time Vision and Image Processing (RTVIP) Labs, as well as the James Martin Center for Nonproliferation Studies at the Middlebury Institute of International Studies at Monterey. In addition, Kitware is continually improving the Kitware Image and Video Exploitation and Retrieval (KWIVER) toolkit, an open source framework for video and image analytics built from Kitware’s years of experience developing analytic systems for customers in multiple domains. Please visit our computer vision and KWIVER webpages for more information on our key focus areas and experience.
Please reach out to firstname.lastname@example.org to schedule a meeting during the event. We look forward to engaging with this community and sharing information on Kitware’s ongoing research and capability development in computer vision and deep learning, as well as our cutting-edge open source vision software, KWIVER and VIAME.