Wide-area video sensors can generate several gigabytes of raw video data per second and hundreds of terabytes over a mission, creating a need for efficient methods of compressing this data for downlink and archive. Standard compression techniques are available, but none exploit the fact that most of the 3D world is static. Building on this observation, Kitware is developing techniques that significantly increase the compression of wide-area video by using 3D models.
To compress video in this manner, the first step is to separate the foreground from the background and identify the dynamic scene elements. In deciding which dynamic elements must be represented, it is critical to consider the short-, long-, and very-long-term changes that affect the scene. Once those elements are identified, the static background can be replaced with a 3D model to enable compression. This 3D model carries the viewpoint- and time-dependent appearance data needed to fully reconstruct the scene. Compression of this kind yields significant gains in storage and efficiency, which are necessary for the increasingly large datasets being ingested.
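The foreground/background separation step described above can be illustrated with a minimal sketch. This is not Kitware's actual method; it assumes a simple per-pixel temporal-median background model over a short window of registered frames, with a hypothetical intensity threshold marking pixels that diverge from the model as dynamic foreground.

```python
import numpy as np

def estimate_background(frames):
    """Per-pixel temporal median as a simple static-background model.

    Assumes the frames are already registered (aligned) so that a given
    pixel sees the same 3D point over time.
    """
    return np.median(frames, axis=0)

def foreground_mask(frame, background, threshold=25):
    """Mark pixels that differ from the background model as dynamic."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff > threshold

# Synthetic example: a mostly static 8x8 scene with one moving bright pixel
# standing in for a dynamic scene element.
rng = np.random.default_rng(0)
static = rng.integers(0, 100, size=(8, 8), dtype=np.int16)
frames = []
for t in range(5):
    f = static.copy()
    f[t % 8, 2] = 255  # the moving "object"
    frames.append(f)
frames = np.stack(frames)

bg = estimate_background(frames)
mask = foreground_mask(frames[0], bg)  # only the moving pixel is flagged
```

In a full pipeline, only the pixels flagged by `mask` would be encoded per frame, while the background would be represented once by the 3D model.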