Dr. Rusty Blue will present a briefing on ‘Fusing 3D Point Data and Video from Mobile Devices for GPS-Denied Navigation’ in the session ‘Active Systems for Surveillance and Reconnaissance’ on August 27, 2014.  Dr. Blue will discuss several Kitware programs that together contribute to a solution for the soldier on the ground in a GPS-denied environment.  Feel free to reach out at computervision@kitware.com to set up a meeting or discussion.  We look forward to seeing you there!

ABSTRACT: ‘Fusing 3D Point Data and Video from Mobile Devices for GPS-Denied Navigation’

Lidar sensors such as HALOE have been very successful in standoff applications such as terrain extraction, change detection, and target detection. Simultaneously, commercial development of near-range, low-cost structured-light devices such as the Kinect is enabling new, innovative applications for hand-held and ground-level scenarios. Recently, Google introduced the Project Tango Development Kit, a mobile phone equipped with frame-rate 3D point data collection via structured light alongside simultaneous video. We expect the Tango and similar devices to enable a new breed of mobile applications for soldiers in the field, fusing local 3D point data with video for improved accuracy in object detection, recognition, GPS-denied navigation, and other applications.

Kitware has developed novel segmentation, detection, and visualization capabilities that apply to both standoff and close-range point data domains, including the Tango for mobile, hand-held use in the field. Pose information from the collection device can be used to combine multiple point data images into a detailed 3D map, applying fine registration algorithms as required. Dominant planar regions are identified, and objects are detected as local outliers. Detectors for specific object types, such as windows or street signs, can be applied to extract detailed scene content.
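The plane-and-outlier decomposition described above can be illustrated with a simple RANSAC plane fit: repeatedly fit a plane to three random points, keep the plane with the most inliers, and report everything far from it as a candidate object. This is a generic sketch, not Kitware's implementation; the distance threshold and iteration count are illustrative values.

```python
import random

def fit_plane(p1, p2, p3):
    """Return the plane (n, d) through three points, with n.x + d = 0
    and n a unit normal, or None if the points are collinear."""
    u = [p2[i] - p1[i] for i in range(3)]
    v = [p3[i] - p1[i] for i in range(3)]
    n = (u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0])
    norm = sum(c * c for c in n) ** 0.5
    if norm == 0:
        return None  # degenerate (collinear) sample
    n = tuple(c / norm for c in n)
    d = -sum(n[i] * p1[i] for i in range(3))
    return n, d

def ransac_plane(points, threshold=0.05, iterations=200, seed=0):
    """Find the dominant planar region; points farther than `threshold`
    from it are reported as local outliers (candidate objects)."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iterations):
        plane = fit_plane(*rng.sample(points, 3))
        if plane is None:
            continue
        n, d = plane
        inliers = [p for p in points
                   if abs(sum(n[i] * p[i] for i in range(3)) + d) < threshold]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    outliers = [p for p in points if p not in best_inliers]
    return best_inliers, outliers
```

On a flat grid of ground points with one elevated point, the grid is recovered as the dominant plane and the elevated point is flagged as an outlier.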

In addition, we fuse video with 3D point data to exploit the complementary advantages of both. For video, we have developed a novel, space-time volumetric segmentation method that partitions the scene into regions with distinct appearance. 3D point segmentation partitions the scene into regions with distinct geometry. Fusing the two provides a semantically meaningful segmentation in which co-planar objects with different appearance can be distinguished, while geometrically complex objects with consistent visual texture, such as trees, can be segmented as single objects. We have also developed methods for GPS-denied navigation through identification and cataloguing of visual landmarks, which are then re-detected on subsequent visits to the same location. With co-collected 3D point data, these landmarks can be mapped onto 3D models for improved position estimation.
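The fusion idea above can be sketched as a label intersection: each point receives a fused region label that is the pair of its appearance label and its geometry label, so two points share a fused region only if they agree in both segmentations. This is a toy illustration of the principle, not the volumetric method itself; the labels below are hypothetical.

```python
def fuse_segmentations(appearance_labels, geometry_labels):
    """Intersect a per-point appearance segmentation with a per-point
    geometry segmentation: each distinct (appearance, geometry) pair
    of labels becomes one fused region."""
    fused = []
    label_of_pair = {}
    for a, g in zip(appearance_labels, geometry_labels):
        pair = (a, g)
        if pair not in label_of_pair:
            label_of_pair[pair] = len(label_of_pair)  # next fresh label
        fused.append(label_of_pair[pair])
    return fused
```

For example, a single wall plane painted in two colors splits into two fused regions (co-planar but visually distinct), while a tree with complex geometry but uniform texture stays one region if its appearance label is consistent.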

For user visualization and exploitation of 3D point data and video, we have developed tools within our ParaView system, an open-source visualization platform that is widely used around the world for 3D and higher-dimensional data analysis. Users can explore the fused point clouds, overlay images and video on the point clouds, create surface mesh models, visualize the sensor location at each imaging event, and compare point clouds or meshes taken at different times.
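Overlaying an image on a point cloud, as described above, amounts to projecting each 3D point through the camera model and sampling the image at the resulting pixel. The sketch below uses a basic pinhole model with hypothetical intrinsics; ParaView's actual texturing pipeline is considerably more involved.

```python
def project_point(point, fx, fy, cx, cy):
    """Project a 3D point in camera coordinates to pixel coordinates
    with a pinhole model: u = fx*X/Z + cx, v = fy*Y/Z + cy."""
    x, y, z = point
    if z <= 0:
        return None  # behind the camera; not visible
    return (fx * x / z + cx, fy * y / z + cy)

def colorize(points, image, fx, fy, cx, cy):
    """Attach the nearest image pixel's value to each point that
    projects inside the image bounds."""
    h, w = len(image), len(image[0])
    colored = []
    for p in points:
        uv = project_point(p, fx, fy, cx, cy)
        if uv is None:
            continue
        u, v = int(round(uv[0])), int(round(uv[1]))
        if 0 <= u < w and 0 <= v < h:
            colored.append((p, image[v][u]))
    return colored
```

With known sensor pose at each imaging event, the same projection also lets a viewer place the camera frustum in the scene alongside the fused point cloud.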

Kitware is a partner with Google on its Tango project, and we have integrated its 3D point and video data into our algorithm and visualization pipeline. The use of hand-held devices is growing very rapidly within the military, and we envision that GPS-denied navigation for soldiers and robots will be enabled by the Tango coupled with onboard processing and visualization.

Authors: Dr. Rusty Blue, Dr. Anthony Hoogs, Xiliang Xu, Casey Goodlet, Heather James, Bill Hoffman, David Thompson, Eric Smith, Brad Davis
