Off-screen Rendering through the Native Platform Interface (EGL)

October 9, 2015


ParaView can run on a supercomputer with thousands of nodes to provide visualization and analysis of very large datasets. In this configuration, the same version of the ParaView analysis pipeline runs on each node to process a piece of the data; the results are rendered in software using Off-Screen Mesa and composited into a final image, which is sent to the ParaView client for display.

Software rendering is used because, until recently, supercomputer nodes did not provide graphics cards, as they were used mainly for computation. This is beginning to change with the release of new GPU accelerator cards, such as the NVIDIA Tesla, which can be used for both computation and off-screen rendering.

The Native Platform Interface (EGL) provides a means to render to a native windowing system, such as Android, X Windows, or Microsoft Windows, or to an off-screen buffer (without the need for a windowing system). For the rendering API, one can choose OpenGL ES, OpenVG, or, starting with EGL version 1.4, full OpenGL.

We have enabled VTK and the ParaView server (pvserver) to render to an EGL off-screen buffer. Through this work, we allow server-side hardware-accelerated rendering without the need to install a windowing system.

Configuration parameters

To compile VTK or ParaView for off-screen rendering through EGL you will need:

  1. A graphics card driver that supports OpenGL rendering through EGL (full OpenGL rendering is supported only in EGL version 1.4 or later). We have tested our code with NVIDIA driver version 355.11.
  2. The EGL headers, which did not come with the NVIDIA driver used in our tests. You can download them from the Khronos EGL Registry.
  3. The VTK advanced configuration option VTK_USE_OFFSCREEN_EGL set to ON.

You’ll get a configuration error if either of the windowing-system options VTK_USE_X or VTK_USE_COCOA is enabled, so you’ll have to disable your windowing system. You’ll also get an error if you are on WIN32, ANDROID, or APPLE_IOS.
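The configuration requirements above can be sketched as a single CMake invocation. This is a minimal, hypothetical example; the source path and the EGL header location are placeholders you must adapt to your system:

```shell
# Hypothetical out-of-source configure of VTK for EGL off-screen rendering.
# Adjust the source directory and EGL header path for your own setup.
cmake ../VTK \
  -DVTK_USE_OFFSCREEN_EGL=ON \
  -DVTK_USE_X=OFF \
  -DEGL_INCLUDE_DIR=/path/to/egl/headers
```

Disabling VTK_USE_X (on Linux) is required because EGL off-screen rendering cannot coexist with a windowing-system build, as noted above.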

If you have several graphics cards on your system, you may need to set the index of the graphics card you want to use, if it differs from the default card chosen by the driver. You can do that if your driver supports the EGL_EXT_platform_device and EGL_EXT_device_base extensions.

You can set the default graphics card used by the render window in VTK through the advanced configuration option VTK_EGL_DEVICE_INDEX, set to an integer such as 0 or 1 for a system with two cards installed. By default, this variable is set to 0, which means that the default graphics card is used. We are investigating a more user-friendly mechanism, such as selecting by the name of the graphics card. Note that the index of the graphics card you need to pass is the same as the index of the card returned by nvidia-smi.
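Putting the two steps together, a typical workflow is to first list the cards and their indices, then pass the chosen index at configure time. This is a sketch; the `-L` flag of nvidia-smi lists GPUs with their indices, and the cmake line assumes an already-configured build tree:

```shell
# List the GPUs and their indices as reported by the NVIDIA driver.
nvidia-smi -L

# Reconfigure VTK so that the second card (index 1) is the default
# device used by the EGL render window.
cmake . -DVTK_EGL_DEVICE_INDEX=1
```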

Runtime parameters

For a system with more than one graphics card installed, you can choose the graphics card used for rendering at runtime, in case it is different from the card set up at configuration time.


If you want to change the graphics card set through the configuration process, you can call vtkRenderWindow::GetNumberOfDevices() to query the number of devices available on a system and vtkRenderWindow::SetDeviceIndex(deviceIndex) to set the device you want to use for rendering.
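As a minimal C++ sketch of the calls above (assuming a VTK build configured with VTK_USE_OFFSCREEN_EGL, in which vtkRenderWindow::New() produces an EGL off-screen window; the device-selection logic here is illustrative, not prescriptive):

```cpp
#include <vtkNew.h>
#include <vtkRenderWindow.h>

int main()
{
  // With VTK_USE_OFFSCREEN_EGL enabled, this is an EGL off-screen window.
  vtkNew<vtkRenderWindow> window;

  // Query how many EGL devices (graphics cards) the driver exposes.
  int numDevices = window->GetNumberOfDevices();

  // Override the configure-time default and render on the last device.
  if (numDevices > 1)
  {
    window->SetDeviceIndex(numDevices - 1);
  }

  window->SetSize(512, 512);
  window->Render();
  return 0;
}
```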


To start pvserver with rendering set on a graphics card different from the card set through the configuration process, pass the following command-line parameter:

--egl-device-index=<device_index>, where <device_index> is the graphics card index.
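For example, to run the server on the second card (this assumes a pvserver built with EGL support; the index matches the ordering reported by nvidia-smi, as described above):

```shell
# Start pvserver rendering on the graphics card with index 1.
pvserver --egl-device-index=1
```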

To check that you are rendering on the correct graphics card in ParaView, use Help > About > Connection Information and inspect the OpenGL Renderer field.

Troubleshooting

  1. Make sure that EGL_INCLUDE_DIR, EGL_LIBRARY, EGL_gldispatch_LIBRARY, and EGL_opengl_LIBRARY point to valid headers and libraries. On Ubuntu 16.04 with NVIDIA driver version 361.42, the libraries are: /usr/lib/nvidia-361/, /usr/lib/nvidia-361/ and /usr/lib/nvidia-361/
  2. Pass --disable-xdisplay-test to pvserver if this option exists. We have seen a case where this test creates problems with EGL rendering.

We hope you enjoy this new feature. It is available in the VTK and ParaView git repositories.


Thanks to Peter Messmer from NVIDIA for answering all our EGL questions.

Update 7/17/2017

We recently changed some of the parameter names described in this article. For more information see ParaView and Offscreen Rendering in the ParaView Documentation.

Comments

  1. Note that the library file names for EGL 358.13 are:
    EGL_LIBRARY /usr/lib/x86_64-linux-gnu/
    EGL_gldispatch_LIBRARY /usr/lib/x86_64-linux-gnu/
    EGL_opengl_LIBRARY /usr/lib/x86_64-linux-gnu/

  2. This post mentions “You’ll also get an error if you are on WIN32​, ANDROID or APPLE_IOS” … Is it possible to do the following on ANDROID? : (1) off-screen rendering from VTK 7.0, (2) get image data to OpenCV Mat and have OpenCV do rendering

  3. Off-screen rendering requires full OpenGL (rather than OpenGL ES), which is not available on Android.

  4. “--egl-device-index=<device_index>, where <device_index> is the graphics card index” is not working with ParaView 5.1.2 built with MPI for Linux (CentOS 6.6, with NVIDIA driver 361.45.18, on a multi-GPU node cluster). Only index 0 can be selected, whatever is passed at runtime. Any hint? Thanks

    1. Ludovic, Indeed there seems to be a problem passing the device index from command line. Try recompiling with VTK_EGL_DEVICE_INDEX=1

      1. Thanks for the fast reply. I will try. Does VTK_EGL_DEVICE_INDEX=1 allow only device 1 (instead of 0), or does it enable receiving any device_index from the command line?

        1. Seems that the command line path is broken, so it will only allow for device 1. We’ll have to look at/fix the command line option.

      1. Thanks for the fix. Removing the suggested line from the binaries and recompiling seems to fix the command line option for device-index. EGL seems to work fine on several multi-GPU (Titan X) nodes.
