April 15, 2026
USGIF GEOINT Symposium 2026

May 3–6, 2026 | Aurora, Colorado | Booth #2230

At the USGIF GEOINT Symposium 2026, the leading event for the geospatial intelligence community, Kitware will present its latest advancements in AI test and evaluation (T&E), computer vision, and interactive visualization. Our work supports national security missions by enabling organizations to better analyze complex data and make informed decisions with confidence.

We invite you to connect with our team at Booth #2230 and join our training sessions and lightning talks to see how our open source technologies and applied research are addressing today’s most pressing GEOINT challenges.

Advancing AI Test & Evaluation for Geospatial Applications

Kitware is advancing the development and deployment of AI systems through a strong focus on test and evaluation. Our approach spans the full lifecycle of AI, from data preparation and model development to rigorous evaluation and operational transition, ensuring systems are reliable, effective, and aligned with mission requirements.

As part of this effort, we are contributing to DARPA’s In the Moment (ITM) program, where we design AI systems that align with human decision-making processes in complex environments. By prioritizing transparency and alignment with human-defined criteria, these systems provide more interpretable and actionable outputs.

For large-scale geospatial workflows, Kitware offers open source platforms such as GeoWATCH and RDWATCH, which enable users to train, evaluate, and deploy AI on satellite imagery through intuitive, web-based tools. These platforms are built to integrate into existing pipelines and support efficient analysis at scale.

We also emphasize responsible and explainable AI, recognizing its importance in operational settings. Our XAITK Toolkit helps users understand how models arrive at their decisions, providing tools for evaluation, visualization, and explanation that strengthen trust and improve human-machine collaboration.

In addition, Kitware’s 3D vision technologies, including our open source TeleSculptor platform, convert aerial imagery and video into detailed 3D models using structure-from-motion techniques. These capabilities support mapping, object detection, and situational awareness—even in environments where metadata is incomplete or unavailable.
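To make the structure-from-motion idea concrete, here is an illustrative sketch (not TeleSculptor's actual code): once two camera poses are known, a feature matched in both images defines two rays, and the 3D point is recovered by triangulation. A minimal midpoint triangulation in Python:

```python
def triangulate_midpoint(c1, d1, c2, d2):
    """Recover a 3D point from two camera rays (origin + direction),
    the geometric core of structure-from-motion: find the closest
    points on the two rays and return their midpoint."""
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def sub(a, b): return [x - y for x, y in zip(a, b)]
    def add(a, b): return [x + y for x, y in zip(a, b)]
    def scale(a, s): return [x * s for x in a]

    w = sub(c1, c2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w), dot(d2, w)
    denom = a * c - b * b  # approaches 0 when the rays are parallel
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    p1 = add(c1, scale(d1, s))  # closest point on ray 1
    p2 = add(c2, scale(d2, t))  # closest point on ray 2
    return scale(add(p1, p2), 0.5)
```

In a full pipeline, thousands of such points and the camera poses themselves are refined jointly by bundle adjustment; the midpoint gives a robust initial estimate even when noisy rays do not exactly intersect.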

Kitware develops its technologies in close collaboration with government and industry partners, delivering open source solutions that emphasize transparency, interoperability, and long-term impact. Visit Booth #2230 to experience these capabilities firsthand and connect with our team.

Kitware Training Sessions and Lightning Talks

Detecting and Segmenting Anything You Can Put into Words with VLMs

Training Session | Monday, May 4 from 7:30–8:30 AM
Presenter: Matt Leotta, Ph.D.

Vision-Language Models (VLMs) let you find and segment objects in large imagery datasets just by describing them in natural language, avoiding the need for costly data labeling and retraining. This session explains how these models work and how they can be applied to geospatial tasks like object detection, segmentation, and even 3D analysis.
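As a toy illustration of the core matching step (independent of any particular VLM), a model such as CLIP embeds the text prompt and candidate image regions into a shared vector space; regions whose embedding is sufficiently similar to the prompt's are kept as detections. The embeddings below are made-up placeholders:

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def match_regions(text_embedding, region_embeddings, threshold=0.5):
    """Return indices of image regions whose embedding lies close to the
    text prompt's embedding -- the matching step behind open-vocabulary
    detection with a vision-language model."""
    return [i for i, r in enumerate(region_embeddings)
            if cosine(text_embedding, r) >= threshold]
```

Because matching happens in embedding space, a new object category is just a new text prompt; no labels are collected and no model weights change.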

Event-Based Sensing: What Is It and How Is It Relevant to GEOINT?

Training Session | Tuesday, May 5 from 2:00–3:00 PM
Presenter: Scott McCloskey, Ph.D.

Event-Based Sensing (EBS) is a new imaging approach where sensors capture only changes in brightness instead of full frames, enabling extremely fast, efficient, and high-dynamic-range data collection. This session introduces how EBS works and how it can be used in geospatial applications like tracking fast-moving objects and identifying vehicles using AI-driven analysis.
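The sensing model can be sketched in a few lines (a simplified simulation, not any vendor's pipeline): each pixel tracks a reference log-brightness and emits a signed event whenever the change since its last event crosses a threshold, rather than reporting a full frame:

```python
import math

def frames_to_events(frames, threshold=0.2):
    """Simulate an event camera on a sequence of grayscale frames.

    Each event is (t, x, y, polarity): +1 when per-pixel log-brightness
    rises past the threshold, -1 when it falls. Static pixels produce
    no output at all, which is the source of EBS efficiency."""
    h, w = len(frames[0]), len(frames[0][0])
    # Reference log-intensity per pixel, initialized from the first frame.
    ref = [[math.log(frames[0][y][x] + 1e-6) for x in range(w)] for y in range(h)]
    events = []
    for t, frame in enumerate(frames[1:], start=1):
        for y in range(h):
            for x in range(w):
                log_i = math.log(frame[y][x] + 1e-6)
                delta = log_i - ref[y][x]
                # One event per threshold crossing; update the reference each time.
                while abs(delta) >= threshold:
                    pol = 1 if delta > 0 else -1
                    events.append((t, x, y, pol))
                    ref[y][x] += pol * threshold
                    delta = log_i - ref[y][x]
    return events
```

Working in log-intensity is what gives event sensors their high dynamic range: a doubling of brightness triggers the same number of events in shadow as in direct sun.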

Alignment of AI with Decision-Making Attributes of GEOINT Experts

Training Session | Wednesday, May 6 from 7:30–8:30 AM
Presenter: Arslan Basharat, Ph.D.

This session explores how Large Language Models can be adapted and aligned to match the specialized reasoning of GEOINT analysts, improving their usefulness in real-world decision-making. It also demonstrates techniques like fine-tuning and prompt training, along with multimodal AI applications, to enhance geospatial analysis workflows.

Few-shot Building Damage Assessment

Lightning Talk | Monday, May 4 from 3:50–3:55 PM
Authors/Presenters: Dennis Melamed, Trevor Stout, and Cameron Johnson

AI-based damage assessment models perform well with large labeled datasets but struggle to adapt quickly to new disasters where labeled data is scarce. This work presents a label-efficient approach that uses pretraining to extract general features, enabling accurate damage classification with as few as 100 labeled samples. The result is faster, more scalable damage assessment with significantly reduced labeling effort, accelerating the delivery of actionable intelligence.
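To illustrate why pretraining enables such label efficiency (a generic sketch, not the presenters' actual method): when a pretrained backbone already yields discriminative features, even a classifier as simple as nearest class centroid can separate damage categories from a handful of labeled examples:

```python
def nearest_centroid_classifier(support_features, support_labels):
    """Build a classifier from a few labeled feature vectors by averaging
    each class's features into a centroid, then assigning new samples to
    the nearest centroid -- a standard label-efficient baseline on top of
    a frozen pretrained feature extractor."""
    centroids = {}
    for feat, label in zip(support_features, support_labels):
        centroids.setdefault(label, []).append(feat)
    for label, feats in centroids.items():
        n = len(feats)
        centroids[label] = [sum(vals) / n for vals in zip(*feats)]

    def classify(feature):
        def dist2(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))
        return min(centroids, key=lambda lbl: dist2(feature, centroids[lbl]))

    return classify
```

The labeling burden shifts from thousands of per-disaster annotations to a small "support set" per new event, which is what makes rapid response feasible.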

Understanding Sensor-based Robustness of Object Detection Models for Overhead Imagery

Lightning Talk | Monday, May 4 from 3:45–3:50 PM
Author/Presenter: Anthony Hoogs, Ph.D.

AI object detection models for overhead imagery often struggle when deployed under sensor conditions different from their training data. This work uses the Natural Robustness Toolkit (NRTK) to simulate varied sensor parameters and systematically evaluate how these changes impact model performance. The results provide insight into model sensitivity and robustness, helping guide better training, evaluation, and deployment strategies.
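The evaluation pattern can be sketched generically (this is not NRTK's actual API): hold a trained model fixed, apply a sensor-style perturbation at increasing severities, and record how accuracy degrades. The toy model and perturbation below are hypothetical stand-ins:

```python
def robustness_sweep(model, images, labels, perturb, levels):
    """Evaluate a fixed model across increasing perturbation severities.

    `perturb(image, level)` simulates a sensor effect (blur, noise,
    reduced resolution, ...); the returned dict maps each severity level
    to accuracy, tracing out the model's robustness curve."""
    results = {}
    for level in levels:
        correct = sum(
            model(perturb(img, level)) == lbl
            for img, lbl in zip(images, labels)
        )
        results[level] = correct / len(images)
    return results
```

The shape of the resulting curve, and especially the severity at which accuracy collapses, tells you which sensor conditions a model can tolerate before retraining or sensor-specific fine-tuning is needed.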

Formal Guarantees of AI Model Robustness for GEOINT Applications

Lightning Talk | Tuesday, May 5 from 3:05–3:10 PM
Author/Presenter: Anthony Hoogs, Ph.D.

MAGNET is an open source toolkit developed under DARPA’s AIQ program to evaluate and improve the reliability and generalization of AI models in real-world deployments. It provides a flexible framework for testing models across text, image, and multimodal tasks using structured evaluations and performance prediction methods. The goal is to help identify model limitations before deployment, supporting more robust and trustworthy AI systems for applications like GEOINT.

Label What Matters: Open-Vocabulary 3D Semantic Segmentation for GEOINT

Lightning Talk | Tuesday, May 5 from 2:10–2:15 PM
Author/Presenter: Matt Leotta, Ph.D.

3D models from UAS and satellite imagery are valuable for GEOINT but lack semantic labels, limiting their usefulness. GU3SS addresses this by using vision-language models to enable open-vocabulary 3D segmentation, allowing analysts to define targets with simple text prompts. This flexible approach reduces retraining needs and improves adaptability across changing missions and environments.

Computer Vision and AI at Kitware

Kitware is a recognized leader in developing advanced artificial intelligence and computer vision solutions for mission-critical applications. We build systems that enable organizations to analyze imagery, video, and multimodal data at scale, with a focus on performance, transparency, and real-world deployment.

Our work spans a wide range of technical areas, including:

  • AI test and evaluation for geospatial and mission systems
  • Human-aligned AI and decision support
  • Responsible and trustworthy AI
  • Geospatial analytics, remote sensing, and 3D reconstruction
  • Object detection, classification, and tracking
  • Multimedia integrity and activity detection
  • Open source platforms for operational AI deployment

We bring deep expertise across the full AI lifecycle, from data curation and model development to evaluation and transition, ensuring systems are robust, reliable, and aligned with mission needs in complex environments.

Working in close collaboration with government agencies, industry partners, and academic institutions, we deliver solutions that support a wide range of operational domains. Our technologies are designed to adapt to evolving challenges and provide lasting value across diverse mission areas. Contact our team to learn more about how we can partner with you.
