3D Reconstruction, Point Clouds, and Odometry
Kitware’s algorithms can extract 3D point clouds and surface meshes from video or images without metadata or calibration information, or exploit that information when it is available. Operating on these 3D datasets, or on others from LiDAR and depth sensors, our methods estimate scene semantics and 3D reconstruction jointly to maximize the accuracy of object classification, visual odometry, and 3D shape. Our open source 3D reconstruction toolkit, TeleSculptor, is continuously evolving to incorporate advancements that automatically analyze, visualize, and make measurements from images and video. LidarView, another open source toolkit developed specifically for LiDAR data, provides 3D point cloud visualization and analysis, fusing data and algorithms to deliver SLAM and other capabilities.
3D Slicer-Based Applications
We create custom plugins, SDKs, applications, and software packages using 3D Slicer. 3D Slicer has been used in a variety of medical and basic scientific applications such as dentistry, radiation oncology, surgical planning, and drug development. These custom software applications can be deployed to local hardware, to remote servers using Docker, or to tablets. To support reproducible workflows, they can also be integrated with our Girder data management solution or with Jupyter notebooks.
Bioinformatics
Our solutions connect cutting-edge bioinformatics research to a solid, extensible platform, where open science is front and center. We develop data management, analysis management, and visualization applications for a wide range of areas including genomics, metabolomics, and phylogenetics. From browsing and analyzing large histology slides on the web, to stabilizing and sharing new ‘omics algorithms through a robust web application, we have data and analytics covered.
Complex Activity, Event, and Threat Detection
Kitware’s tools recognize high-value events, salient behaviors and anomalies, complex activities, and threats through the interaction and fusion of low-level actions and events in dense, cluttered environments. Operating on tracks from WAMI, FMV, MTI, or other sources, these algorithms characterize, model, and detect actions, such as people picking up objects and vehicles starting/stopping, along with complex threat indicators such as people transferring between vehicles and multiple vehicles meeting. Many of our tools feature alerts for behaviors, activities, and events of interest, and support highly efficient search through huge data volumes, such as full-frame WAMI missions, using approximate matching. This allows you to identify actions in massive video streams and archives to detect threats, despite missing data, detection errors, and deception.
Computational Physiological Modeling
Our open source Pulse Physiology Suite features a well-validated and documented computational physiology engine for real time simulations of the body’s response to trauma, disease, and treatment. Pulse Physiology Explorer is an extendable user interface for quick exploration and experimentation with the Pulse Physiology Engine.
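The engine's core pattern, advancing a physiological state at a fixed timestep while actions (trauma, treatment) perturb it, can be illustrated with a toy sketch. This is not Pulse's actual API; the state variables, rates, and compensatory response below are invented for illustration:

```python
# Toy sketch of the advance-in-time pattern a physiology engine exposes:
# a state dictionary updated at a fixed timestep, with an "action"
# (here, a hemorrhage) perturbing it mid-run. Purely illustrative.

def advance(state, dt, bleed_rate=0.0):
    state["blood_ml"] -= bleed_rate * dt
    # Crude, made-up compensatory response: heart rate rises as volume falls.
    deficit = max(0.0, 5000.0 - state["blood_ml"])
    state["heart_rate"] = 72.0 + 0.02 * deficit
    return state

state = {"blood_ml": 5000.0, "heart_rate": 72.0}
for _ in range(60):                          # one minute at dt = 1 s
    advance(state, dt=1.0, bleed_rate=5.0)   # 5 mL/s hemorrhage
print(round(state["blood_ml"]), round(state["heart_rate"]))  # 4700 78
```

A real engine like Pulse solves coupled differential equations across many organ systems each timestep; the loop structure, not the physiology, is the point here.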
Cross-platform Interactive Applications
We work on a variety of cross- and multi-platform applications from desktop, to server, to mobile, to cloud, to web. The focus of these applications includes distributed 2D and 3D ultrasound, augmented reality, manual and semi-automatic segmentation and registration, quality control workflows, and surgical robotics.
At their core, these applications are built on our technologies and expertise in image processing, segmentation, registration, and surgical guidance. We work directly with customers to design workflows, user experiences, and custom interfaces from the ground up. Our development, testing, and documentation practices are aligned with FDA requirements and HIPAA technical safeguards for software products.
Cyber-Physical Systems
The physical environment presents a unique, ever-changing set of challenges to any sensing and analytics system. Kitware has designed and constructed state-of-the-art cyber-physical systems that perform onboard, autonomous processing to gather data and extract critical information. Computer vision and deep learning technology allow our sensing and analytics systems to overcome the challenges of a complex, dynamic environment. They are customized to solve real-world problems in aerial, ground, and underwater scenarios. These capabilities have been field-tested and proven successful in programs funded by R&D organizations such as DARPA, AFRL, and NOAA.
Dataset Collection and Annotation
The growth of deep learning has increased the demand for high-quality labeled datasets to train models and algorithms. The power of these models and algorithms depends heavily on the quality of the training data available. Kitware has developed and refined dataset collection, annotation, and curation processes to build capabilities that are accurate and unbiased, rather than riddled with errors or false positives. Kitware can collect and source datasets and design custom annotation pipelines. We can annotate image, video, text, and other data types using our in-house, professional annotators, some of whom have security clearances, or leverage third-party annotation resources when appropriate. Kitware also performs quality assurance driven by rigorous metrics that highlight when returns are diminishing. All of this data is managed by Kitware for continued use to the benefit of our customers, projects, and teams.
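A metric-driven stopping rule of the kind described above can be sketched minimally: track a validation metric as annotation batches are added, and flag the point where the marginal gain drops below a threshold. The function name, threshold, and accuracy history below are hypothetical:

```python
def diminishing_returns(accuracies, min_gain=0.005):
    """Return the batch index at which the accuracy gain between
    consecutive annotation batches first falls below `min_gain`,
    i.e. where more labeling stops paying off."""
    for i in range(1, len(accuracies)):
        if accuracies[i] - accuracies[i - 1] < min_gain:
            return i
    return None  # gains are still healthy; keep annotating

# Hypothetical validation accuracies after each annotation batch.
history = [0.62, 0.71, 0.76, 0.79, 0.801, 0.803]
print(diminishing_returns(history))  # 5: the 0.801 -> 0.803 step adds little
```

In practice the metric might be inter-annotator agreement or downstream model performance, but the shape of the decision is the same.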
Deep Learning
Through our extensive experience in AI and our early adoption of deep learning, we have made dramatic improvements in object detection, recognition, tracking, activity detection, semantic segmentation, and content-based retrieval. In doing so, we have addressed different customer domains with unique data, as well as operational challenges and needs. Our expertise focuses on hard visual problems, such as low resolution, very small training sets, rare objects, long-tailed class distributions, large data volumes, real-time processing, and onboard processing. Each of these requires us to creatively utilize and expand upon deep learning, which we apply to our other computer vision areas of focus to deliver more innovative solutions.
Dental, Craniomaxillofacial, and Musculoskeletal Image Analysis
Our projects and research aim to quantitatively explore how age, disease or treatment affect structures in the skeleton and craniomaxillofacial (CMF) complex. This improved knowledge can help diagnose disease early, plan and measure treatment, and monitor the progression of certain conditions. In particular, we are experts in morphometry analysis, a technique that can be used to quantitatively plan surgery or measure remodeling in the bones. Our musculoskeletal image analysis methods, for example, can quantify bone quality or tooth integrity. We also develop CMF-specific surgical trainers to improve procedural knowledge and surgical proficiency without sacrificing patient safety.
Digital Pathology
We are building a suite of open source web-based informatics tools that manage, visualize, and analyze massive and growing collections of data in digital pathology. Key solutions under development include the Digital Slide Archive (DSA), HistomicsTK, and Large-image. DSA is a web-based platform for the aggregation, management, and dissemination of large collections of whole-slide histopathology images, along with associated clinical and genomic metadata. HistomicsTK serves as both a web-based analytics platform and a standalone Python toolkit. It contains computer vision and machine learning algorithms for the quantitative analysis of whole-slide histopathology images and associated data. Large-image supports the web-based visualization and annotation of large multi-resolution whole-slide histopathology images. It also includes a Python API for reading/writing these images in a tiled fashion.
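The tiled-access pattern that makes whole-slide images tractable can be sketched in a few lines. The function names below are illustrative, not Large-image's actual API, and a plain nested list stands in for a multi-gigapixel slide:

```python
def iter_tiles(image, tile_size):
    """Yield (x, y, tile) for fixed-size tiles covering a 2D image,
    so the full slide never has to be resident in memory at once."""
    h, w = len(image), len(image[0])
    for y in range(0, h, tile_size):
        for x in range(0, w, tile_size):
            tile = [row[x:x + tile_size] for row in image[y:y + tile_size]]
            yield x, y, tile

# Stand-in "slide": a 4x6 grid of pixel values.
slide = [[r * 10 + c for c in range(6)] for r in range(4)]
tiles = list(iter_tiles(slide, tile_size=3))
print(len(tiles))  # 4 tiles: 2 tile rows x 2 tile columns
```

Real tiled formats add pyramidal resolution levels on top of this, so a viewer can fetch coarse tiles for overview and fine tiles on zoom.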
End-to-End Simulation Workflows
We have expertise developing tools that address the simulation workflow from defining the proper geometric shape or mesh, to integrating simulation parameters, to queuing the simulation job, to visualizing and analyzing the simulation results. This approach to simulation workflow management breaks down the monolithic approach to HPC and simulation. Our approach is much more modular, and it can be easily tailored to specific simulations from hydrological to nuclear reactor simulations.
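The modularity described above can be sketched as a chain of plain, swappable stages. Every function name and parameter below is hypothetical, standing in for the mesh definition, parameter integration, job queuing, and visualization steps:

```python
# Minimal sketch of a modular simulation workflow: each stage is a plain
# function, so a stage can be swapped per domain (hydrology, reactor
# physics, ...) without touching the others.

def define_mesh(resolution):
    return {"cells": resolution ** 2}

def attach_parameters(mesh, params):
    return {**mesh, **params}

def queue_job(job):
    # A real system would submit to an HPC scheduler and poll for
    # completion; here we "run" the job inline.
    return {"job": job, "status": "complete"}

def visualize(result):
    return f"{result['job']['cells']} cells, status={result['status']}"

mesh = define_mesh(resolution=8)
job = attach_parameters(mesh, {"viscosity": 1e-3})
result = queue_job(job)
print(visualize(result))  # 64 cells, status=complete
```

The design point is that no stage knows about the internals of the others, which is what lets the same workflow skeleton serve very different simulation codes.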
Explainable and Ethical AI
Integrating AI via human-machine teaming can greatly improve capabilities and competence as long as the team has a solid foundation of trust. To trust your AI partner, you must understand how the technology makes decisions and feel confident in those decisions. Kitware has developed powerful tools to explore, quantify, and monitor the behavior of deep learning systems. Our team is making deep neural networks explainable and robust when faced with previously unknown conditions. In addition, our team is stepping outside of classic AI systems to address domain-independent novelty identification, characterization, and adaptation, so that systems can acknowledge the introduction of unknowns. We also value the need to understand the ethical concerns, impacts, and risks of using AI. That’s why Kitware is developing methods to understand, formulate, and test ethical reasoning algorithms for semi-autonomous applications.
Geospatial Data Systems and Visualization
We offer advanced capabilities for geospatial analysis and visualization. We support a range of use cases from analyzing geolocated Twitter traffic, to processing and viewing complex climate models, to working with large satellite imagery datasets. Our open source tools and expert staff provide full application solutions, linking raw datasets and geospatial analyses to custom web visualizations.
High Performance I/O and Scientific Workflows
We have expertise in developing and deploying high performance input/output (I/O) capabilities. I/O is one of the most pressing challenges with large-scale simulations and can be a major bottleneck when not done properly. I/O bottlenecks can often result in scientists discarding or not analyzing large amounts of their data. Kitware is one of the major partners of the Adaptable Input/Output System (ADIOS) framework used to address this challenge. The ADIOS framework delivers a highly optimized I/O and coupling infrastructure that enables efficient data exchanges, moving data to the storage system and between multiple codes running concurrently through in situ and in transit processing. At its core, ADIOS is an I/O library built on a self-describing data model that utilizes a publish/subscribe mechanism. It is used to efficiently read and write large amounts of data to and from storage systems as well as to and from other codes. Together with our visualization infrastructure, VTK and ParaView, it enables efficient data analysis, visualization, code coupling, and checkpoint/restart generation.
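The two ideas named above, self-describing data and publish/subscribe exchange, can be sketched together in a few lines of stdlib Python. This is a conceptual toy, not the actual ADIOS (adios2) API:

```python
class Stream:
    """Toy publish/subscribe stream carrying self-describing variables."""
    def __init__(self):
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, name, data):
        # Self-describing record: the payload travels with its own
        # metadata, so consumers need no out-of-band schema.
        record = {"name": name, "shape": (len(data),),
                  "dtype": "float", "data": data}
        for cb in self.subscribers:
            cb(record)

received = []
stream = Stream()
stream.subscribe(lambda rec: received.append(rec["name"]))       # e.g. an in situ analysis
stream.subscribe(lambda rec: received.append(sum(rec["data"])))  # e.g. a coupled code
stream.publish("temperature", [1.0, 2.0, 3.0])
print(received)  # ['temperature', 6.0]
```

In ADIOS the "subscribers" can be other MPI codes or in situ readers, and the transport behind `publish` can be files, staging memory, or the network, without the producer changing.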
Image and Video Forensics
In this new age of disinformation, it has become critical to validate the integrity and veracity of images, video, and other sources. As photo-manipulation and photo generation techniques are evolving rapidly, we are continuously developing algorithms to automatically detect image and video manipulation that can operate at scale on large data archives. These advanced deep learning algorithms give us the ability to detect inserted, removed, or altered objects, distinguish deep fakes from real images, and identify deleted or inserted frames in videos in a way that exceeds human performance. We continue to extend this work through multiple government programs to detect manipulations in falsified media exploiting text, audio, images, and video.
Image Guided Intervention and Surgical Planning
We develop image-guided intervention and surgical planning applications that replace traditional surgery and invasive procedures with minimally invasive techniques that incorporate medical imaging to guide the intervention. Patients prefer these procedures to open surgeries because they are typically less traumatic to the body and result in faster recovery times. Technological advancements in medical imaging, registration algorithms, visualization technologies, and tracking systems are driving forces behind increased adoption of these procedures by physicians. Software is an integral part of these image-guided intervention systems. Whether it is for interfacing with a tracking device to collect position information from surgical instruments, integrating intraoperative and pre operative images, or generating a 3D visualization to provide visual feedback to the clinician, software has a critical role. The software platforms we are developing at Kitware are playing a major role in increasing the pace of research and discovery in image-guided intervention systems by promoting collaborations between clinicians, biomedical engineers, and software developers across the globe.
In Situ Analysis and Visualization
Input/output (I/O) is one of the most pressing challenges with large-scale simulations. It is already common for simulations to discard most of what they compute in order to minimize time spent on I/O. As scientific computing moves to the exascale, the disparity between computational capability and I/O capability continues to expand. Since storing all data is no longer viable for many simulation applications, data analysis and visualization must now be performed in situ. ParaView Catalyst is a lightweight version of the ParaView server library that is designed to be directly embedded into parallel simulation codes to perform in situ analysis and visualization at runtime. ParaView Catalyst was used with SENSEI in the largest-to-date in situ simulation run, which was the first known to exceed the milestone of one million Message Passing Interface (MPI) processes.
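The in situ pattern Catalyst implements can be illustrated schematically: the solver hands each timestep's state to a coprocessing hook, which extracts a small derived product instead of writing the full state to disk. Everything below (the solver update, the hook, the extracted quantity) is invented for illustration:

```python
def run_simulation(steps, coprocess):
    """Advance a stand-in solver state, invoking an in situ
    coprocessing hook each timestep instead of dumping state to storage."""
    state = 0.0
    for step in range(steps):
        state += 1.5          # stand-in for the real solver update
        coprocess(step, state)
    return state

extracted = []
def coprocess(step, state):
    # Keep only a tiny derived product (as Catalyst extracts rendered
    # images or reduced data), discarding the full simulation state.
    if step % 2 == 0:
        extracted.append(round(state, 1))

final = run_simulation(steps=5, coprocess=coprocess)
print(extracted)  # [1.5, 4.5, 7.5]
```

In a real integration the hook runs ParaView pipelines against the solver's live data structures, so only kilobyte-scale products ever touch the filesystem.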
Interactive Do-It-Yourself Artificial Intelligence
DIY AI enables end users – analysts, operators, engineers – to rapidly build, test, and deploy novel AI solutions without having expertise in machine learning or even computer programming. Using Kitware’s interactive DIY AI toolkits, you can easily and efficiently train object classifiers using interactive query refinement, without drawing any bounding boxes. You are able to interactively improve existing capabilities to work optimally on your data for tasks such as object tracking, object detection, and event detection. Our toolkits also allow you to perform customized, highly specific searches of large image and video archives powered by cutting-edge AI methods. Currently, our DIY AI toolkits, such as VIAME, are used by scientists to analyze environmental images and video. Our defense-related versions are being used to address multiple domains and are provided with unlimited rights to the government. These toolkits enable long-term operational capabilities even as methods and technology evolve over time.
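One classic way to implement interactive query refinement (the source does not state which method Kitware's toolkits use) is Rocchio-style feedback: the query representation is pulled toward examples the user marks relevant and pushed away from those marked irrelevant. A minimal sketch, with illustrative vectors and default weights:

```python
def rocchio(query, positives, negatives, alpha=1.0, beta=0.75, gamma=0.25):
    """One round of Rocchio-style query refinement: move the query
    vector toward labeled-relevant examples and away from
    labeled-irrelevant ones."""
    def mean(vecs):
        n = len(vecs)
        return ([sum(v[i] for v in vecs) / n for i in range(len(vecs[0]))]
                if vecs else [0.0] * len(query))
    pos, neg = mean(positives), mean(negatives)
    return [alpha * q + beta * p - gamma * n
            for q, p, n in zip(query, pos, neg)]

refined = rocchio([1.0, 0.0],
                  positives=[[0.0, 1.0], [0.0, 3.0]],
                  negatives=[[2.0, 0.0]])
print(refined)  # [0.5, 1.5]: shifted toward the relevant examples
```

Iterating this loop, query, review top results, relabel, re-query, is what lets a non-programmer sharpen a classifier without ever drawing a bounding box.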
Medical Image Analysis
Our expertise in the development of custom image analysis algorithms spans brain morphology assessment associated with mental disorders, tumor volume estimation for clinical trials, vessel modeling for stroke and tumor microenvironment research, multiparametric MRI prostate cancer assessment, deep learning for interpreting histology images, and a number of other applications. Building on our role in the creation and maintenance of libraries such as the Insight Toolkit (ITK) and applications such as 3D Slicer, we lead and partner on basic research grants, small business grants, and development contracts for the National Institutes of Health and the Department of Defense. These encompass nearly every aspect of medical image segmentation, registration, quantification, and computer-aided diagnosis.
In addition to working on grants and contracts, we extend ITK and 3D Slicer with new algorithms to speed the deployment of pre-clinical and clinical products, and we collaborate on research investigations.
Medical Visualization
Kitware is a leader in scientific visualization, including medical data visualization. Kitware began with the open-source release of the Visualization Toolkit (VTK) in 1996, and that toolkit has become the leading visualization tool in multiple scientific domains, including medical imaging. VTK is capable of generating visualizations of exascale data using supercomputers, heterogeneous data (e.g., genomic as well as image data) using cloud resources, and composite geometric and volume-rendered data on desktops. VTK can then stream any or all of those visualizations to mobile devices, surgical microscopes, and augmented reality / virtual reality systems. Our philosophy is to innovate, promote, and support “pervasive visualization,” whereby the data you need to make a decision is presented to you in an intuitive format, when and where you need it, within your own workflows. Examples of our implementation of pervasive visualization include the ITK-JupyterWidgets for visualizing data within the Jupyter Lab Python research environment, 3D Slicer for biomedical research data visualization, ParaView Glance for in-browser visualization of a wide variety of scientific data, and ParaView Server for visualization of high-fidelity biomedical simulations of blood flow and/or respiratory air motion.
Object Detection, Recognition, and Tracking
Our video object detection and tracking tools are the culmination of years of continuous government investment. Deployed operationally in various domains, our mature suite of trackers can identify and track moving objects in many types of intelligence, surveillance, and reconnaissance data (ISR), including video from ground cameras, aerial platforms, underwater vehicles, robots, and satellites. These tools are able to perform in challenging settings and address difficult factors, such as low contrast, low resolution, moving cameras, occlusions, shadows, and high traffic density, through multi-frame track initialization, track linking, reverse-time tracking, recurrent neural networks, and other techniques. Our trackers can perform difficult tasks including ground camera tracking in congested scenes, real-time multi-target tracking in full-field WAMI and OPIR, and tracking people in far-field, non-cooperative scenarios.
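Track linking, one of the techniques named above, can be sketched at its simplest as greedy nearest-neighbor association between existing tracks and new detections. The 1-D positions, distance gate, and function name below are all illustrative; production trackers use motion models, appearance features, and global assignment instead:

```python
def link(tracks, detections, max_dist=2.0):
    """Greedy nearest-neighbor association: extend each track with the
    closest unclaimed detection within max_dist; unmatched detections
    start new tracks. A drastically simplified stand-in for track linking."""
    free = list(detections)
    for track in tracks:
        if not free:
            break
        last = track[-1]
        best = min(free, key=lambda d: abs(d - last))
        if abs(best - last) <= max_dist:
            track.append(best)
            free.remove(best)
    tracks.extend([[d] for d in free])
    return tracks

tracks = [[10.0], [20.0]]
tracks = link(tracks, [10.8, 19.5, 30.0])
print(tracks)  # [[10.0, 10.8], [20.0, 19.5], [30.0]]
```

The gating threshold is what keeps a track from jumping to a distant detection; multi-frame initialization and reverse-time tracking address the cases this greedy scheme gets wrong.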
Semantic Scene Understanding
Kitware’s knowledge-driven scene understanding capabilities use deep learning techniques to accurately segment scenes into object types. In video, our unique approach defines objects by behavior, rather than appearance, so we can identify areas with similar behaviors. Through observing mover activity, our capabilities can segment a scene into functional object categories that may not be distinguishable by appearance alone. These capabilities are unsupervised, so they automatically learn new functional categories without any manual annotations. Semantic scene understanding improves downstream capabilities such as threat detection, anomaly detection, change detection, 3D reconstruction, and more.
Software Process Implementation
We have developed standard software processes, including agile methodologies, continuous testing, and verification. Our expertise comes from years of creating and running large-scale, distributed, open source development projects such as the Visualization Toolkit (VTK), the Insight Toolkit (ITK), and ParaView. This expertise has spurred the creation of a suite of tools, including CMake, CTest, and CDash. These tools allow for continuous, per-branch, and nightly building and testing across all platforms, and they provide immediate feedback to developers for robust software. We have also worked with companies and organizations to establish and grow software development communities through an efficient software process.
Super Resolution and Enhancement
Kitware’s super-resolution techniques enhance single or multiple images to produce higher-resolution, improved images. We use novel methods to compensate for widely spaced views and illumination changes in overhead imagery, particulates and light attenuation in underwater imagery, and other challenges in a variety of domains. The resulting higher-quality images enhance detail, enable advanced exploitation, and improve downstream automated analytics, such as object detection and classification.
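The simplest multi-image intuition behind this, far simpler than the methods described above, is that averaging pixel-aligned low-resolution frames cancels independent noise, recovering signal no single frame contains cleanly. A toy 1-D sketch with invented data:

```python
def fuse(frames):
    """Average pixel-aligned low-resolution frames; independent noise
    cancels, so the fused frame is cleaner than any input."""
    n = len(frames)
    return [sum(px) / n for px in zip(*frames)]

# Three noisy observations of the same underlying 1-D signal [10, 20, 30, 40].
frames = [
    [11, 19, 31, 39],
    [10, 21, 29, 41],
    [9, 20, 30, 40],
]
print(fuse(frames))  # [10.0, 20.0, 30.0, 40.0]
```

Real super-resolution must first register frames taken from widely spaced views and under changing illumination, which is where the hard part, and Kitware's contribution, lies.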
Ultrasound Image Analysis
We are integrating artificial intelligence and deep learning technologies with custom ultrasound and augmented reality hardware to advance the use of ultrasound in a variety of applications. These applications include preclinical and clinical research, pre-hospital patient triage, bedside patient monitoring, and precision needle guidance. Our integrations enable less-experienced operators to complete these procedures with confidence, in less time, and with expert-level outcomes. These technologies have been transitioned into several consulting projects and commercial products.
Virtual Simulation in Healthcare
Our experience with developing medical skill and procedural trainers includes developing the underlying real-time technologies such as fast numerical solvers, haptic rendering algorithms, advanced rendering for 2D and virtual reality displays, collision processing, and custom hardware interfacing. These technologies are embedded in our Interactive Medical Simulation Toolkit (iMSTK), a C++-based open-source toolkit that aids rapid prototyping of interactive multi-modal surgical simulations. iMSTK features a highly modular, easy-to-use framework with a comprehensive ecosystem of tools and algorithms required to develop end-to-end medical planners and trainers. Beyond the technologies exclusive to iMSTK, applications can benefit greatly from its interfacing with Kitware’s other open-source software tools such as VTK, 3D Slicer, and Pulse. Such synergistic use of disparate software has broadened the range of medical applications that are possible and has already helped Kitware successfully build virtual trainers for laparoscopic camera navigation, kidney biopsy, and osteotomy procedures.
Visualization and Analysis
Data analysis is often the most critical bottleneck in the scientific discovery process. The exponential growth of datasets in size, scale, complexity, and richness makes it difficult for researchers and scientists to analyze data and obtain insight. As data approaches exascale, this problem becomes even more pressing. Moreover, data is becoming increasingly complex, not only because of enhanced resolution but because of the integration of experimental observations and associated metadata within datasets. We address these large data analysis issues by providing expertise in high-performance computing, distributed visualization, and data processing. We develop the computational infrastructure and tools to power large data analysis and visualization, in particular VTK and ParaView. These tools enable scientists to tackle today’s most pressing research challenges.
Web Visualization
We extend our heritage as leaders in scientific visualization with the Visualization Toolkit (VTK) through new visualization solutions for use in general data exploration and understanding. Our web visualizations bring together the best in modern visual analysis, lighting up data in new ways.
We have created an easily customizable web framework, ParaViewWeb, to build applications with interactive scientific visualization inside the web browser. Coupling our visualization expertise with next-generation web technologies, we have structured ParaViewWeb so that it provides a collection of visualization components to illustrate patterns and structure in large datasets. Each component highlights one of the many possible ways of viewing datasets. These visualization components can be integrated into a web-based workbench-like environment that provides new interfaces to support discovery, exploration, filtration, and analysis.
Let’s talk about your project.
Now that you know what we can do, let’s talk about how we can leverage these areas to benefit your project.