ParaView comes packed with a wide range of tools out of the box that are more than enough for many applications. Nevertheless, sometimes you want to add functionality that no existing built-in tool provides in the way that you want. Of course, there is always the option to open up the ParaView source code and start tweaking and adding C++ code, but this can be daunting if you are new to ParaView and VTK, or if you simply don’t want to deal with compiling code. An often simpler solution is to leverage ParaView’s extensive support for Python.
ParaView provides two main ways of using Python: processing data in programmable filters, and automating ParaView workflows via stand-alone scripts and the embedded Python shell. The former can be used to quickly test custom algorithms, while the latter can fully automate anything that you can do via the ParaView graphical interface. Both allow the user to integrate third-party libraries with Python bindings, which opens up many interesting possibilities.
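To give a flavour of the first approach, a programmable filter typically reads data from its input and derives new arrays on its output. The sketch below mimics that pattern outside ParaView, using plain Python tuples instead of VTK arrays so it runs anywhere; the function name and data layout are illustrative assumptions, not ParaView API:

```python
import math

# Hedged sketch: the kind of per-point computation one might prototype in a
# programmable filter -- deriving a new scalar value from each point's
# coordinates. Written against plain Python tuples instead of VTK arrays
# so it runs outside ParaView.

def distance_from_origin(points):
    """Return one scalar per (x, y, z) point: its Euclidean distance."""
    return [math.sqrt(x * x + y * y + z * z) for (x, y, z) in points]

print(distance_from_origin([(3.0, 4.0, 0.0), (0.0, 0.0, 1.0)]))  # [5.0, 1.0]
```

Once a computation like this works, porting the loop into a programmable filter’s Script body is a small step.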
Reproducing The 3D Depth Illusion
One of these possibilities is to easily re-implement the 3D depth illusion shown below (source). This illusion received a lot of attention when it was implemented on a smartphone last year. The same effect is nevertheless possible on any “dumb” screen with just a webcam and OpenCV.
Face-Tracking In ParaView With OpenCV
OpenCV is a well-known C/C++ library with Python bindings widely used in real-time computer vision. With its extensive range of functionality, it has essentially become the Swiss army knife of the field. The following video shows how OpenCV’s facial recognition can be used to create an immersive 3D effect in ParaView. The effect is enhanced by the monocular camera view but even with two eyes on the screen it still gives an increased sense of depth.
The effect is achieved by tracking the user’s face with a webcam and OpenCV. The relative position of the face in the image is then used to determine the angle at which the scene should be shown and the scene’s camera position is updated accordingly. The result is that the scene appears to physically “float” in the middle of the screen.
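The geometry behind this update can be sketched independently of OpenCV and ParaView. Assuming the face centre is reported in normalized image coordinates (0 to 1 in each axis), one plausible mapping converts the offset from the image centre into azimuth and elevation angles. The function name and gain values here are illustrative assumptions, not the repository’s actual code:

```python
# Hedged sketch of the camera-update geometry: map the detected face
# position (normalized image coordinates, 0..1) to viewing angles.
# Gains and names are illustrative assumptions, not the repository's code.

def face_to_angles(face_x, face_y, max_azimuth=30.0, max_elevation=20.0):
    """Map a normalized face position to (azimuth, elevation) in degrees.

    A face at the image centre (0.5, 0.5) gives (0, 0); moving the head
    left/right or up/down tilts the virtual camera the opposite way, so
    the scene appears to float in place.
    """
    azimuth = (0.5 - face_x) * 2.0 * max_azimuth
    elevation = (face_y - 0.5) * 2.0 * max_elevation
    return azimuth, elevation

print(face_to_angles(0.5, 0.5))  # (0.0, 0.0)
print(face_to_angles(1.0, 0.5))  # (-30.0, 0.0)
```

In ParaView, angles like these could then be applied through the active camera (for example via vtkCamera’s Azimuth and Elevation methods) before triggering a render.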
The examples in the demo show that this effect works at very different scales for the large range of datasets that ParaView typically handles. Specifically, the datasets in the video are:
- A simple geometric shape.
- Board games reconstructed from smartphone videos.
- A map reconstructed from LiDAR with VeloView.
- Anatomical data showing a head and cranium.
- A caffeine molecule loaded from a PDB file.
Just like ParaView, the face-tracking code is open-source and available on GitLab. Note that it is currently just a quick proof-of-principle demonstration with plenty of room for improvement. Feel free to test it out with other examples but don’t expect it to be robust enough yet for all situations.
Setting up face-tracking with the code is relatively straightforward; it consists of two main components. One is a simple script that runs in a terminal and connects to a webcam. The other is a Python module that is imported within ParaView’s Python shell. A single object is then created to continuously update the scene’s active camera and refresh the view. Once launched, the code runs in the background until the user stops it. Launching the camera adjuster requires just four short lines of Python code in the shell:
```python
import adjust_camera
cam = GetActiveCamera()
ca = adjust_camera.CameraAdjuster(cam, Render)
ca.start()
```
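Internally, an object of this kind typically runs a polling loop on a background thread: read the latest face position, update the camera, trigger a render, and repeat until stopped. A minimal, hedged sketch of that pattern, with stand-in callables instead of the real camera, webcam feed, and Render function (this is not the repository’s actual implementation), could look like:

```python
import threading
import time

# Hedged sketch of the background-update pattern a CameraAdjuster-style
# object might use. `get_position` and `render` are stand-ins for the
# webcam reader and ParaView's Render function.

class BackgroundAdjuster:
    def __init__(self, get_position, render, interval=0.05):
        self._get_position = get_position
        self._render = render
        self._interval = interval
        self._stop = threading.Event()
        self._thread = None

    def start(self):
        """Launch the update loop on a daemon thread."""
        self._thread = threading.Thread(target=self._loop, daemon=True)
        self._thread.start()

    def stop(self):
        """Ask the loop to exit and wait for the thread to finish."""
        self._stop.set()
        if self._thread is not None:
            self._thread.join()

    def _loop(self):
        while not self._stop.is_set():
            position = self._get_position()  # e.g. latest face coordinates
            self._render(position)           # e.g. update camera + Render()
            time.sleep(self._interval)

# Example run with stand-ins that just record what was "rendered".
seen = []
adjuster = BackgroundAdjuster(lambda: (0.5, 0.5), seen.append, interval=0.01)
adjuster.start()
time.sleep(0.05)
adjuster.stop()
print(len(seen) > 0)  # True: the loop ran at least once
```

Running the loop on its own thread is what lets the ParaView shell stay responsive while the camera keeps tracking the user’s face.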
The README in the source repository provides detailed instructions on how to set up the face-tracking in ParaView. There is also a convenient Bash script that automatically launches ParaView with the webcam manager in the background. The script ensures that the associated Python modules are found by ParaView. It also prints a single command on the terminal that the user can copy-paste into the Python Shell; this command sets up the camera adjuster as above and prints instructions directly in the shell.