The yt project (http://yt-project.org/) aims to provide a set of analysis and visualization tasks for astrophysical datasets, primarily drawn from hydrodynamic simulations. yt has been used to analyze the formation of the first stars in the universe, galaxy mergers, present-day star formation, and galaxy clusters. It is built in Python, is parallelized with mpi4py, and can conduct a number of tasks specific to astrophysical analysis and visualization.
Where yt has come up short is in its user interface and interactive 3D rendering. While yt has long had a scripting interface and API to perform tasks such as image creation and volume rendering, these have always been designed from the standpoint of deliberate, rather than exploratory, visualization. ParaView, on the other hand, has been designed from the ground up to enable exploratory visualization with a strong quantitative component. It exposes a rich API and allows for very detailed interoperability with Python components.
In working with Jorge Poco, George Zagaris, Charles Law and Berk Geveci of Kitware for several months this year, yt has been instrumented to be callable from ParaView, enabling a bidirectional flow of data between yt and ParaView. This was accomplished first by allowing in-memory data structures in yt to be supplied to VTK as their corresponding data structures (with little to no duplication of memory), and then by allowing VTK components to be supplied as raw data to yt. Technically, this relies on the ProgrammablePythonSource capability in ParaView and the streaming data front-end in yt.
Figure: Halo detection and analysis using the yt plugin: (a) resulting output after running the HOP algorithm, with the halo region encapsulated in the sphere, shown in white, overlaid on top of the particle input dataset; (b) streamlines seeded at the halo region, visualizing the flow around the halo.
This interoperability allows yt to load data, process it (for instance, applying unit conversions), and supply that data to ParaView for processing. Conversely, data flowing in the other direction, from ParaView to yt, can then be processed and analyzed by yt with the results being passed back to ParaView. This allows for analysis such as halo finding, multi-resolution image generation, spectral energy distribution, and even multi-resolution (AMR) aware volume rendering to be conducted in yt and displayed in ParaView.
The simplest way to get data to ParaView from yt, while retaining full control over the process, is to create a 2D image and then wrap it with a VTK object. If you have yt in your ParaView-accessible Python path, you can do this with a ProgrammablePythonFilter that outputs image data.
from paraview.vtk import dataset_adapter as DA
import yt.mods
import numpy as na

# Load an Enzo dataset and project Density along the x-axis
pf = yt.mods.EnzoStaticOutput(fn)  # fn: path to an Enzo dataset
proj = pf.h.proj(0, "Density")

# Pixelize the projection into a 512x512 fixed-resolution buffer
frb = yt.mods.FixedResolutionBuffer(
    proj, (0.0, 1.0, 0.0, 1.0), (512, 512))

# Hand the image to ParaView through the filter's output
pdo = self.GetOutput()
pdo.SetDimensions(512, 512, 1)
pdo.SetSpacing(1, 1, 1)

# Flatten to a 1D scalar array and attach it to the image data
farr = na.log10(frb["Density"])
arr = DA.numpyTovtkDataArray(farr.ravel(), "Density")
pdo.GetPointData().AddArray(arr)
A projection is taken through the domain, pixelized into a fixed-resolution buffer, and then supplied back to ParaView in the middle of the pipeline. Subsequent filters can then be applied and the results displayed.
Jorge Poco developed a full-fledged plugin to ParaView for yt this summer by unifying the various approaches and using the ParaView plugin definition format. This plugin allows for interactive plotting of adaptive-resolution images, volume rendering utilizing a camera and transfer function edited directly in ParaView, and most importantly the ability to conduct both these yt-specific tasks while still utilizing the broad array of tasks available from ParaView.
In the future, we hope to expose more tasks from yt to ParaView, and vice versa. A particularly important aspect of this will be using the co-processing capabilities of ParaView. By using its existing protocol for concurrent visualization with extremely large simulations, we hope to be able to use ParaView as an engine for visualization and analysis, on top of which we will apply astrophysics-specific tasks from yt. Exploring these possibilities will help to strengthen domain-specific analysis guided by interactive exploration, from small scales to large.
Matthew Turk is an NSF OCI Postdoctoral fellow at Columbia University, working on simulations of the universe’s first stars and galaxies. He has a PhD from Stanford University and is interested in the high-redshift universe, HPC, agile and immersive data analysis, and the development of infrastructure for next-generation simulations and visualization.