The NSMB solver solves the Navier-Stokes equations using the finite volume approach. Space discretization schemes include central and upwind methods. The equations are integrated in time using the LU-SGS scheme.
NSMB offers a variety of modern turbulence models. Different levels of chemistry modeling are available for hypersonic flows, ranging from chemical equilibrium to thermo-chemical non-equilibrium. NSMB has been parallelized following the SPMD paradigm, with MPI for message passing.
ParaView has been an important asset to this small company's portfolio of tools, and is used in mesh and fluid flow visualization tasks.
The NSMB solver archives its data in a proprietary file format. Large and complex flight geometries can produce gigabytes of data for steady and transient simulations.
Our interface has grown alongside the release cycles of ParaView, and today we are extremely grateful to Kitware and to the open source community at large for all the features available. Our ParaView plugin for the NSMB solver supports a wide range of data analysis tasks, which we present succinctly here.
Hierarchical Meshes, Quantitative and Qualitative Displays
Our reader plugin relies on a low-level access API for the coordinates and data arrays, and lets us concentrate on building the VTK objects that make up a hierarchy of containers of 3D and 2D grids. Enabling parallel reading was particularly important, given today's multi-core servers and desktops. In fact, our rendering needs are often easily handled by a single hardware renderer (the ParaView client), whereas a pvserver can share the workload among several machines.
Figure 1: A collapsed view of our hierarchy of multi-block datasets.
A typical flight configuration may include several thousand body-fitted 3D curvilinear grids, along with surface patches for the solid geometry and the walls of the different boundary conditions. The data will include standard flow variables (density, pressure, velocity) and wall shear stress vectors. ParaView enables the graceful presentation and sub-setting of such hierarchies of data blocks, along with data fields and multiple time-cycles.
Multi-parameter studies, where flow conditions may change and the simulation is re-executed, can take advantage of the side-by-side viewing available in Comparative View mode, as shown below for a hypersonic flow simulation.
Figure 2: Transonic Cruiser configuration from the EU-funded SimSAC project, at different angles of attack.
The integrated viewing possibilities are also particularly handy: 3D geometry views, spreadsheet displays of tabular data, XY plots, and synchronized selection between different viewports (highlighted in purple below).
Figure 3: Density profiles around a wing side-cut and tabular data on the nacelle.
Parallel Execution and Python Scripting
Scripting is key to a great visualization tool, and it is vastly improved when done in Python. ParaView 3.6.2 now integrates a trace tool that assists the user in generating human-readable Python scripts. We foresee that this new feature will eventually make state files unnecessary.
We rely on Python scripting to automate the more mundane tasks, or to add functionality not available in the GUI. The following example illustrates how we can operate on FieldData (there are currently no filters enabling access to FieldData). Our multi-patch surface geometries are encoded with a single integer, telling us either the boundary condition type (e.g. solid, symmetry, inflow/outflow walls) or the solid object (wheel, wing, nose, body, nacelle). Part selection requires access to this 1-tuple array, called "BCcode" and stored in FieldData. A Python Programmable Filter is the perfect tool to traverse the hierarchy of structured grids, shallow-copying only those matching a user-selectable set of codes, stored in a Python list:
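As a minimal, hedged sketch of that selection logic: plain dictionaries stand in for the VTK grids and their FieldData, and the BCcode values below are made up. The real Programmable Filter traverses the composite dataset with VTK's iterator API and shallow-copies matching blocks to the output.

```python
# Sketch of BCcode-based block selection (hypothetical data).
# In a real ParaView Programmable Filter, "blocks" would be the leaves
# of a vtkMultiBlockDataSet and the match would trigger a shallow copy.

def select_blocks(blocks, wanted_codes):
    """Keep only the blocks whose 1-tuple BCcode is in the user's list."""
    return [b for b in blocks if b["FieldData"]["BCcode"] in wanted_codes]

# Hypothetical surface-patch hierarchy: one integer code per patch.
patches = [
    {"name": "wing",    "FieldData": {"BCcode": 100}},  # solid wall
    {"name": "nacelle", "FieldData": {"BCcode": 100}},  # solid wall
    {"name": "inflow",  "FieldData": {"BCcode": 200}},  # inflow boundary
]

solid_walls = select_blocks(patches, [100])
print([p["name"] for p in solid_walls])  # ['wing', 'nacelle']
```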
ParaView's trace tool enables rapid prototyping, fine-tuning, and execution with pvbatch, ParaView's parallel batch-processing tool. Visualization campaigns that used to last several days, driven from state files after cumbersome and error-prone hand-editing, can now be replaced by Python scripts. The scripts are also our preferred method for regression testing.
Examples of a Few CFD-Specific Filters
The following Python script drives our drawing of the wall-bound shear stress lines of Figure 5.
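As a hedged, self-contained sketch of the underlying idea: a shear stress line follows the direction of the wall shear-stress vector field, which can be traced with simple explicit Euler steps. The field below is analytic and made up; the actual script instead builds a ParaView pipeline (stream tracing over the surface patches).

```python
# Sketch: trace a surface shear-stress line by integrating the local
# shear direction. tau() is a hypothetical, analytic stand-in for the
# wall shear-stress vectors stored on the surface grids.
import math

def tau(x, y):
    """Hypothetical wall shear-stress vector at surface point (x, y)."""
    return (1.0, 0.2 * math.sin(x))

def shear_line(seed, step=0.05, n_steps=200):
    """Trace one shear-stress line from a seed point (forward Euler)."""
    pts = [seed]
    x, y = seed
    for _ in range(n_steps):
        tx, ty = tau(x, y)
        norm = math.hypot(tx, ty) or 1.0
        x += step * tx / norm  # advance along the unit shear direction
        y += step * ty / norm
        pts.append((x, y))
    return pts

line = shear_line((0.0, 0.0))
print(len(line))  # 201 points, marching downstream from the seed
```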
Plot-over-sorted-lines, a recent addition to ParaView, allows us to draw an XY plot of pressure contours across wing side-cuts (left of Figure 3). This is a must-have for CFD studies.
A new addition to ParaView is a query engine, enabling selection of nodes or cells by field values. Finding, for example, the low-pressure points downstream of a flying aircraft to study wing-tip vortices is now a trivial task with a simple query: Select Points where Pressure is less than 121000.
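The effect of such a query can be sketched in plain Python (coordinates and pressures below are made up; inside ParaView the query engine operates directly on the point data arrays of the loaded dataset):

```python
# Sketch of "Select Points where Pressure is less than 121000":
# a simple threshold over per-point pressure values; the selected
# points then serve as streamline seeds. All values are hypothetical.

def select_points(points, pressure, threshold=121000.0):
    """Return the points whose pressure falls below the threshold."""
    return [p for p, pr in zip(points, pressure) if pr < threshold]

points   = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
pressure = [101325.0, 125000.0, 118000.0]

seeds = select_points(points, pressure)
print(seeds)  # [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
```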
The points are then used as seeds for streamlines and they highlight quite well the downstream high vorticity region, as shown below:
Figure 4: Wing-tip vortices ending in a low-pressure zone.
New representation plugins are of outstanding value in enhancing our visual experience. Take the new Line Integral Convolution (LIC) plugin, integrated into ParaView 3.8, and compare the two pictures below, which depict the wall shear stresses on a wing. Recent work by Zhanping Liu at Kitware on the stream-tracer filter had already improved our ability to draw the shear stress lines (left of Figure 5). Yet this remains a very difficult task to automate. Challenges include controlling the density of streamlines; handling very thin geometries (the upper and lower sides of a wing along its edges), where Z-fighting makes both sets of streamlines show through along the edges; and streamlines which gradually "lift" above the wing and are thus stopped. The LIC painter resolves these issues immediately (right image), can be run across multi-grid surfaces, and offers real-time display on today's commodity graphics cards, with no tuning required.
Figure 5: Streamlines and LIC texture for the shear stresses on the tail wing of a supersonic jet.
Multi-block containers in VTK do not store block-neighbor information. Yet our flow solver data includes block-to-block and abutting-patch connectivity, and we think that some filters could use this metadata to speed up spatial search tasks, for example in streamline computation. Topology-based filters could also provide information such as vortex cores and critical point analysis, going beyond the qualitative displays rendered by the LIC painter.
Our parallel strategy for grid placement on the different MPI ranks of the ParaView server currently uses a round-robin assignment. Surface patches, which are special VOIs of the 3D blocks, are also kept by the processor holding the parent grid. Yet the ordering comes from the mesh partitioner and can vary quite a bit, grouping blocks which are spatially far apart. We are currently working on optimizing this block-to-processor mapping, with a special option to align grids along the major fluid flow direction (upstream to downstream); we hope to gain performance by making streamline computation more local to each processor.
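The two mapping strategies above can be sketched as follows; the block names, block centers, and the contiguous-chunk heuristic are illustrative assumptions, not the production scheme:

```python
# Sketch: round-robin assignment in partitioner order, versus a chunked
# assignment after sorting blocks along the main flow direction (x),
# so that upstream/downstream neighbors tend to share a rank.

def round_robin(blocks, n_ranks):
    """Assign block i to rank i % n_ranks, in the given order."""
    return {b: i % n_ranks for i, b in enumerate(blocks)}

def chunked(blocks, n_ranks):
    """Give each rank a contiguous slice of the ordered block list."""
    per_rank = -(-len(blocks) // n_ranks)  # ceiling division
    return {b: i // per_rank for i, b in enumerate(blocks)}

# Hypothetical x-coordinates of each block's center.
centers = {"b0": 7.5, "b1": 0.5, "b2": 3.0, "b3": 1.0}

plain   = round_robin(list(centers), 2)
aligned = chunked(sorted(centers, key=centers.get), 2)

print(plain)    # {'b0': 0, 'b1': 1, 'b2': 0, 'b3': 1}
print(aligned)  # {'b1': 0, 'b3': 0, 'b2': 1, 'b0': 1}
```

With the flow-aligned, chunked mapping, the two most upstream blocks (b1, b3) land on the same rank, which is the locality we hope to exploit for streamline computation.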
 CFS Engineering: www.cfse.ch/cfse/site/tools-NSMB.php
Jean M. Favre is a research engineer at the Swiss National Supercomputing Centre (CSCS) in Manno, Switzerland, where he supports multiple visualization projects with ParaView, ranging from astrophysics to biomechanics, fluid dynamics, geophysics, and molecular dynamics.
Jan Vos leads CFS Engineering with his experience in fluid mechanics simulations, including magneto-hydrodynamic flows, combustion, hypersonic flows, and flows for aeronautical applications. Jan is also a lecturer at the Laboratory of Computational Engineering at EPFL.