The 2017 Joint Meeting of the Military Sensing Symposium (MSS) Specialty Groups on Battlespace Acoustic, Seismic, Magnetic, and Electric-Field Sensing and Signatures (BAMS) and the National Symposium on Sensor and Data Fusion (NSSDF) will be held October 31 through November 2 in Springfield, VA. This notable Department of Defense (DoD) conference, sponsored by Army RDECOM AMRDEC, Army RDECOM CERDEC, the Army Research Laboratory (ARL), the Air Force Research Laboratory (AFRL), the Office of Naval Research (ONR), and the Office of the Under Secretary of Defense, will be held at the National Geospatial-Intelligence Agency (NGA) and will include presentations, poster sessions, and technology-specific sessions discussing critical technologies being developed and implemented for the DoD.
Kitware, a Bronze sponsor of the event, will be in attendance and has been selected to give four technical presentations in the NSSDF meetings on October 31 and November 1. NSSDF will focus on areas such as cyber situational awareness and understanding, methods for improved target tracking, new methods in data fusion, and disruptive technologies. Kitware’s presentation titled “Fusing Visible and Infrared (IR) Video on Mobile Robots, UAVs and Warfighters for Real-Time, Squad-Level Situational Awareness” will be briefed on October 31 under Session N1D: Fusion. A member of Kitware’s Computer Vision team will discuss Kitware’s ongoing technology development to improve the Intelligence, Surveillance, and Reconnaissance (ISR) capabilities of Army and Marine rifle squads, sponsored by the DARPA Squad-X Core Technologies program. Kitware and its collaborators, the University of Maryland and the University of Pennsylvania, are developing the THreat Reconnaissance and Exploitation from Audio-Video Target eXtraction (THREAT X) system, which uses unmanned systems and fuses IR and visible (RGB) sensors to build automated, intelligent video processing and alerting that supports the soldier on the ground without adding burden. Kitware has performed multiple data collections over extended periods to train deep learning computer vision techniques for person detection and threat classification in battlefield scenarios. A quantified assessment of this system will be included. Authors contributing to this project and this paper include Anthony Hoogs, Ph.D., Keith Fieldhouse, and Eran Swears, Ph.D., all part of Kitware’s Computer Vision team.
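As a rough illustration of the kind of decision-level fusion involved in combining IR and RGB person detections, the Python sketch below matches bounding boxes from co-registered frames by overlap and averages their confidences. The `Detection` structure, box format, and threshold values are illustrative assumptions for exposition, not the actual THREAT X interfaces.

```python
# Minimal sketch of decision-level fusion of IR and RGB person detections.
# Assumes both detectors run on co-registered frames, so boxes share one
# coordinate frame. All names and thresholds here are illustrative.
from dataclasses import dataclass

@dataclass
class Detection:
    box: tuple    # (x1, y1, x2, y2) in pixels
    score: float  # detector confidence in [0, 1]

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def fuse(rgb_dets, ir_dets, match_iou=0.5):
    """Average confidences of overlapping IR/RGB detections;
    keep unmatched detections from either modality as-is."""
    fused, used_ir = [], set()
    for r in rgb_dets:
        best_j, best = None, match_iou
        for j, t in enumerate(ir_dets):
            if j not in used_ir and iou(r.box, t.box) >= best:
                best_j, best = j, iou(r.box, t.box)
        if best_j is None:
            fused.append(r)
        else:
            used_ir.add(best_j)
            t = ir_dets[best_j]
            fused.append(Detection(r.box, (r.score + t.score) / 2))
    fused += [t for j, t in enumerate(ir_dets) if j not in used_ir]
    return fused

rgb = [Detection((100, 40, 160, 200), 0.62)]
ir = [Detection((104, 42, 158, 196), 0.91), Detection((300, 50, 340, 170), 0.75)]
for d in fuse(rgb, ir):
    print(d.box, round(d.score, 2))
```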
On November 1, under Session N2B: Disruptive, Game-Changing Systemic and Technological Concepts, Kitware will brief “Deep Learning for Object Detection and Object-based Change Detection in Satellite Imagery.” This presentation will discuss some of Kitware’s technological contributions to the Visual Global Intelligence and Analytics Toolkit (VIGILANT) program, funded by the Air Force. Kitware is building an open, extensible software framework and demonstration system that will provide cutting-edge automated change detection, tipping, and cueing mechanisms for exploitation of commercial satellite imagery and video. The system will integrate existing state-of-the-art algorithms from Kitware and the Rochester Institute of Technology (RIT) for satellite image registration, object detection, and object-based change detection, incorporating the latest deep learning techniques to enable rapid adaptation to new changes, objects, and/or sensor characteristics using the DIRSIG simulation tool. VIGILANT is built on Kitware-developed open source frameworks for distributed processing and management of heterogeneous big data sources, and it leverages Kitware’s geographic visualization and analytics capabilities packaged into a modern, browser-based dynamic user interface. Authors contributing to this project and this paper include Charles Law, Ph.D., Jason Parham, Ph.D., Matt Dawkins, Paul Tunison, David Stoup, Rusty Blue, Ph.D., Keith Fieldhouse, Matt Turek, Ph.D., and Anthony Hoogs, Ph.D., all from Kitware; S. Han, A. Farafard, J. Kerekes, E. Ientilucci, M. Gartley, A. Savakis, and T. Rovito, all from RIT; and S. Thomas and C. Stansifer from the Air Force Research Laboratory (AFRL).
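To give a concrete flavor of object-based change detection in general, the sketch below bins detections from two registered satellite images into a coarse grid and flags cells whose per-class object counts differ between collection times. The grid scheme, class names, and detection format are hypothetical simplifications, not the VIGILANT implementation.

```python
# Minimal sketch of object-based change detection between two registered
# satellite images: detections are binned into a coarse grid, and cells whose
# per-class object counts differ are flagged as changes. Grid size, classes,
# and the detection format are illustrative assumptions.
from collections import Counter

CELL = 50  # grid cell size in pixels; registered images share coordinates

def bin_detections(dets):
    """dets: list of (class_name, x_center, y_center) -> {cell: Counter}."""
    cells = {}
    for cls, x, y in dets:
        key = (int(x // CELL), int(y // CELL))
        cells.setdefault(key, Counter())[cls] += 1
    return cells

def object_changes(dets_t0, dets_t1):
    """Yield (cell, class, count_t0, count_t1) wherever counts differ."""
    c0, c1 = bin_detections(dets_t0), bin_detections(dets_t1)
    for cell in sorted(set(c0) | set(c1)):
        classes = set(c0.get(cell, {})) | set(c1.get(cell, {}))
        for cls in sorted(classes):
            n0 = c0.get(cell, Counter())[cls]
            n1 = c1.get(cell, Counter())[cls]
            if n0 != n1:
                yield cell, cls, n0, n1

before = [("vehicle", 120, 80), ("vehicle", 130, 85), ("building", 400, 300)]
after_ = [("vehicle", 125, 82), ("building", 400, 300), ("aircraft", 60, 60)]
for cell, cls, n0, n1 in object_changes(before, after_):
    print(f"cell {cell}: {cls} {n0} -> {n1}")
```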
Under Session N2C: Emerging Threats and Challenge Domains, Kitware will present “Automatic Pattern of Life Learning in Satellite Images through Graph Kernels.” This presentation will detail methods that are successfully being used to learn typical observed patterns in satellite images, such as the types and density of vehicles at a specific time of day. Learning such patterns can be very challenging due to differing imaging conditions, such as resolution, view angle, and sensor modality, including Electro-Optical (EO), Infrared (IR), and Hyperspectral Imagery (HSI). To address these challenges, Kitware will brief a new, robust, and scalable approach based on graph modeling and rapid comparison through graph kernels to identify typical patterns in large satellite images; a rough sketch of the graph-kernel idea appears below. The approach builds upon content extracted automatically from the satellite images through deep learning models for object detection and semantic scene segmentation. Authors include John Moeller, Ph.D., Eric Smith, Ph.D., Arslan Basharat, Ph.D., Matt Turek, Ph.D., and Anthony Hoogs, Ph.D., all from Kitware, and Erik Blasch from the Air Force Office of Scientific Research (AFOSR).
Figure: The SegNet deep learning model for semantic segmentation, extended to satellite images.
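For readers unfamiliar with graph kernels, the following sketch shows one common variant of the idea: each scene becomes a graph whose nodes are detected objects labeled by type and whose edges connect nearby objects, and a Weisfeiler-Lehman-style relabeling yields label histograms whose dot product serves as a similarity kernel. This is a generic textbook sketch under assumed data structures, not the specific kernel Kitware will present.

```python
# Generic Weisfeiler-Lehman-style graph kernel sketch for comparing
# satellite-scene content. Nodes are detected objects labeled by type;
# edges connect spatially nearby objects. Data structures are assumed.
from collections import Counter

def wl_histogram(labels, adj, iterations=2):
    """labels: {node: str}; adj: {node: [neighbors]} -> Counter of labels
    accumulated over WL relabeling rounds."""
    hist = Counter(labels.values())
    cur = dict(labels)
    for _ in range(iterations):
        new = {}
        for node, lab in cur.items():
            neigh = sorted(cur[n] for n in adj.get(node, []))
            new[node] = lab + "|" + ",".join(neigh)  # compressed label
        cur = new
        hist.update(cur.values())
    return hist

def graph_kernel(g1, g2, iterations=2):
    """Similarity between two (labels, adj) graphs: histogram dot product."""
    h1 = wl_histogram(*g1, iterations)
    h2 = wl_histogram(*g2, iterations)
    return sum(h1[k] * h2[k] for k in h1)  # sparse dot product

# Toy scenes: nodes = object types, edges = proximity.
scene_a = ({0: "truck", 1: "truck", 2: "building"}, {0: [1, 2], 1: [0], 2: [0]})
scene_b = ({0: "truck", 1: "truck", 2: "building"}, {0: [1, 2], 1: [0], 2: [0]})
scene_c = ({0: "aircraft", 1: "building"}, {0: [1], 1: [0]})

print(graph_kernel(scene_a, scene_b))  # high: identical layouts
print(graph_kernel(scene_a, scene_c))  # low: different content
```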
Finally, under Session N2C, Kitware will brief “Using Convolutional Neural Networks (CNNs) for Content-Based Full Motion Video (FMV) Retrieval.” The rapid growth in the development and use of Unmanned Aerial Systems (UAS) has dramatically increased the amount of FMV data available to analysts. This poses many problems: automated analytic capabilities have not kept pace, and very few tools are available for alerting or archive search due to challenges such as low resolution, appearance variability, complex scenes, and shadows. Deep learning shows great promise in automating the fundamental tasks of FMV exploitation, including object recognition, object-based retrieval, and image search. Kitware will discuss Hierarchical Dynamic Video Exploitation (HiDyVE), which uses CNNs, a form of deep learning for imagery, to detect and describe objects in FMV streams and archives. The CNNs are used for automatic target recognition when training data is available per object type, and for video search. The system will be detailed and demonstrated on FMV data to show the audience how deep learning improves these capabilities versus a traditional, non-deep learning approach. Authors include Matt Dawkins, Roddy Collins, Ph.D., and Anthony Hoogs, Ph.D., all from Kitware’s Computer Vision team.
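To illustrate CNN-based content retrieval in general terms (not HiDyVE itself), the sketch below embeds frames with an off-the-shelf pretrained torchvision backbone and ranks an archive by cosine similarity to a query frame. The file paths are placeholders, and the backbone choice is an assumption made for the example.

```python
# Generic sketch of content-based retrieval with CNN features: frames are
# embedded with a pretrained backbone and ranked by cosine similarity to a
# query. Illustrates the general technique only; paths are placeholders.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Pretrained ResNet-18 with the classifier head removed -> 512-d embeddings.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def embed(path):
    """L2-normalized CNN embedding for one image (e.g., an FMV frame)."""
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    f = backbone(x).squeeze(0)
    return f / f.norm()

# Hypothetical archive of frame images extracted from FMV clips.
archive = ["frame_001.jpg", "frame_002.jpg", "frame_003.jpg"]
index = torch.stack([embed(p) for p in archive])

def search(query_path, top_k=3):
    """Rank archive frames by cosine similarity to the query frame."""
    q = embed(query_path)
    scores = index @ q  # cosine similarity, since embeddings are unit norm
    best = torch.argsort(scores, descending=True)[:top_k]
    return [(archive[i], float(scores[i])) for i in best]

print(search("query_frame.jpg"))
```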
Please reach out to computervision@kitware.com to arrange a meeting with Matt Turek, Keith Fieldhouse, and Rusty Blue during the event to discuss Kitware’s capabilities and technology contributions to the DoD.