Heather James will present a briefing on ‘Open Source Computer Vision and Analytics to Support and Grow the UAS and ASPRS Communities’ in the ‘Data Processing’ session on October 22, 2014. Please reach out at computervision@kitware.com to set up meetings while in Reno. We are looking forward to it!
Abstract: Open Source Computer Vision and Analytics to Support and Grow the UAS and ASPRS Communities
Rapid progress in the computational sciences is propelled by unfettered access to data and the unrestricted sharing of computational tools, typically implemented in software platforms. The open science movement empowers researchers in industry, government, and academia through the free flow of open data, open source software, and open access publication, removing friction and delays from the process of sharing findings across institutions and across geographically distributed teams.
From a technical point of view, Open Source is the application of the scientific method to the practice of software engineering. From an economic point of view, it builds on a peer-production model that reduces the cost of producing high-quality, shareable research tools, cutting the time and resources invested in creating the tools required to process and analyze field data. Adoption of open source practices will accelerate progress in photogrammetry by bringing the tools developed by many individual groups onto a common platform, so that they can be immediately used and improved upon by others, regardless of institutional affiliation or geographic location. This rapid sharing of resources also enables the verification of reproducibility that is at the core of the scientific method, through which findings reported in technical and scientific forums can be independently verified to the benefit of the entire field.
The Computer Vision domain applies models of human vision to extract knowledge from multiple modalities of imagery, particularly video and multi-spectral images, often combined with 3D sensing capabilities such as LIDAR and structured light. Computer vision systems typically proceed by extracting basic features from the raw sensor data, then applying machine learning techniques to identify patterns in large collections of features that reveal correlations in the data. The resulting findings are applied to quantitative evaluation of field conditions, and are also fed back into the data acquisition process itself to improve subsequent analysis.
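The feature-extraction-then-learning pipeline described above can be sketched with synthetic data. This is a minimal illustration using NumPy only: the synthetic images, the two hand-picked features, and the tiny k-means clusterer are stand-ins for the much richer descriptors and learners a production system would use, not any specific tool mentioned here.

```python
import numpy as np

def extract_features(image):
    """Basic per-image features: mean intensity and mean gradient
    magnitude (crude stand-ins for the richer descriptors a real
    CV pipeline would compute)."""
    gy, gx = np.gradient(image.astype(float))
    return np.array([image.mean(), np.hypot(gx, gy).mean()])

def kmeans(features, k=2, iters=20):
    """Tiny k-means clusterer: groups feature vectors to expose
    patterns in a collection of extracted features."""
    # Deterministic init: seed centers at the extremes of the first feature
    centers = features[[np.argmin(features[:, 0]),
                        np.argmax(features[:, 0])]].copy()
    for _ in range(iters):
        dists = np.linalg.norm(features[:, None] - centers[None], axis=2)
        labels = np.argmin(dists, axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = features[labels == j].mean(axis=0)
    return labels

# Synthetic "sensor data": low-texture frames vs. high-texture frames
rng = np.random.default_rng(1)
smooth = [0.2 + 0.01 * rng.standard_normal((32, 32)) for _ in range(5)]
textured = [rng.random((32, 32)) for _ in range(5)]
feats = np.array([extract_features(im) for im in smooth + textured])
labels = kmeans(feats)  # separates the two kinds of frames
```

Even with such crude features, the clustering step recovers the underlying pattern (smooth versus textured frames) without any labels, which is the essence of the pipeline described above.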
Integrating Computer Vision into UAS adds automation and improves exploitation of the data acquired by these systems, increasing both the efficiency with which data is analyzed and the amount of actionable information extracted from it. CV differs from photogrammetry in placing less emphasis on the precision of measurements and the direct calibration of devices, and more on the analysis and interpretation that generate knowledge from the directly measured data. CV provides complementary capabilities by bringing in analysis based on the temporal dynamics of the data, as well as generating conceptual models describing the situation on the ground.
CV methodologies that will be critical for photogrammetry and remote sensing applications include image alignment and registration, 3D from video, change detection, feature extraction, and content retrieval, to name a few. These methodologies will contribute to disciplines ranging from urban planning and development to agriculture and soil assessment, environmental resource management, and disaster management. By combining these methods and capabilities, the UAS community can analyze data more effectively and deliver more accurate information for intelligent decision-making in unmanned airborne remote sensing across the scientific mapping community.
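As one small illustration of the image alignment and registration methodology listed above, the following NumPy sketch estimates the translation between two images by phase correlation, a classic frequency-domain registration technique. The synthetic images and the integer circular-shift assumption are for illustration only; real registration pipelines additionally handle subpixel shifts, rotation, and illumination changes.

```python
import numpy as np

def phase_correlation(ref, moved):
    """Estimate the integer (dy, dx) translation that maps `ref`
    onto `moved` via phase correlation: the normalized cross-power
    spectrum of the two images has an inverse FFT that peaks at
    the relative shift."""
    F1 = np.fft.fft2(ref)
    F2 = np.fft.fft2(moved)
    cross = np.conj(F1) * F2
    cross /= np.abs(cross) + 1e-12      # keep phase, discard magnitude
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peaks past the midpoint to negative shifts
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return dy, dx

# Synthetic test: shift a random image by a known amount and recover it
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
moved = np.roll(ref, shift=(5, -3), axis=(0, 1))
shift = phase_correlation(ref, moved)  # recovers (5, -3)
```

Because the method works on whole-image spectra, it is robust to noise and needs no feature detection, which is why variants of it are widely used as a first alignment step before finer registration.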