Novel Mechanism Segments Anatomical Structures for 3D Printing

A fundamental task in most areas of medical image computing is segmentation, i.e., the delineation of anatomical structures of interest for further processing and quantification. Segmentation can be manual, semi-automatic (an algorithm initialized with limited user input), or fully automatic (an autonomous algorithm). A multitude of software tools and algorithms exist for each type, and segmentation has been the subject of extensive research in medical image computing.

Segmentation results are most commonly stored as three-dimensional (3D) binary volumes (labelmaps), in which each volumetric element (voxel) simply indicates whether it lies inside or outside the structure of interest. Binary volumes are optimal for most processing algorithms. For visualizing structures, however, surface models are preferable: instead of a structured grid of voxels, a surface model consists of a point cloud connected by triangles, which can be rendered in a 3D view.
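As a minimal, plain-Python sketch (for illustration only, not 3D Slicer's internal data structures), the two representations can be contrasted like this:

```python
# A binary labelmap: a 3D voxel grid where 1 = inside the structure, 0 = outside.
size = 8
labelmap = [[[0] * size for _ in range(size)] for _ in range(size)]
for i in range(2, 6):
    for j in range(2, 6):
        for k in range(2, 6):
            labelmap[i][j][k] = 1  # a small cubic "structure"

# Voxel-based processing is straightforward, e.g., counting inside voxels:
volume_voxels = sum(v for plane in labelmap for row in plane for v in row)

# A surface model, by contrast, is a point cloud plus triangles that index into it.
points = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
triangles = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]  # a tetrahedron

print(volume_voxels)  # 64
```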

Several other representations exist. In radiation therapy (RT), the Digital Imaging and Communications in Medicine (DICOM) standard [1] specifies planar contours. In addition, certain segmentation algorithms yield labelmaps with voxels that indicate probabilities instead of binary decisions. Even more obscure representations include ribbons, which help to quickly visualize planar contours.

A brain stem structure in different representations: (A) planar contours, (B) closed surface, (C) binary labelmap, and (D) ribbon model.

Unfortunately, representing and processing anatomical structures present major difficulties. These include operation (conversion between representations may be necessary); identity (the origin, or provenance, of each structure and what it represents must be tracked); validity (representations may become outdated after the underlying data changes, so invalid data must never be accessible); and coherence (the structures in a set typically correspond to the same entity, i.e., the patient, so the in-memory objects that relate to them need to form a unified whole).

3D Slicer [2] is one of the most popular open-source platforms in the world for medical image analysis and visualization. The Laboratory for Percutaneous Surgery (PerkLab) at Queen’s University uses 3D Slicer extensively for a wide variety of research projects. Previously, representing segmentations in 3D Slicer posed considerable hurdles, not only in terms of the four above-mentioned difficulties but also with regard to processing time. Researchers and clinicians spent time manually converting one representation to another; for example, they used the Model Maker module to create 3D surfaces from labelmaps, and had to repeat this every time a labelmap changed.

A Mechanism for Dynamic Anatomical Structure Management

PerkLab designed and implemented a new data type in 3D Slicer (and, before that, in the SlicerRT extension [3]) called the segmentation node. This data type stores multiple structures and multiple representations in the same object, which addresses the “identity” difficulty. (See the following image.) The structures remain synchronized after changes to the underlying data, which addresses the “coherence” difficulty.

The data type is part of a complete mechanism that manages the contained representations. Whenever a representation is requested, for example when the 3D view accesses the surface model of a segmentation, the mechanism performs the conversion automatically. This overcomes the “operation” difficulty.

A master representation ensures that invalid data never becomes available, which tackles the “validity” difficulty. The master representation is the one in which the data was originally created (the labelmap when segmenting manually or semi-automatically, the planar contour when loading from DICOM-RT), and it serves as the source of all conversions. When the master representation changes, e.g., while segmenting an organ, all other representations are cleared; they are re-converted only when requested again.
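This invalidation logic can be sketched in a few lines of Python. The class and method names here are hypothetical, chosen for illustration; they are not the actual 3D Slicer implementation:

```python
class Segmentation:
    """Keeps one authoritative master representation; derived representations
    are cached and cleared whenever the master changes."""

    def __init__(self, master_name, master_data, converters):
        self.master_name = master_name
        self.master_data = master_data
        self.converters = converters  # {name: function(master_data) -> representation}
        self._cache = {}              # derived representations, computed on demand

    def set_master(self, data):
        self.master_data = data
        self._cache.clear()  # all derived representations are now invalid

    def get_representation(self, name):
        if name == self.master_name:
            return self.master_data
        if name not in self._cache:  # re-convert lazily, only when requested
            self._cache[name] = self.converters[name](self.master_data)
        return self._cache[name]


# Usage: a labelmap master with a stub "surface" converter.
seg = Segmentation("labelmap", [1, 1, 0],
                   {"surface": lambda lm: ["triangle"] * sum(lm)})
print(seg.get_representation("surface"))  # ['triangle', 'triangle']
seg.set_master([1, 1, 1])                 # editing clears cached representations
print(seg.get_representation("surface"))  # ['triangle', 'triangle', 'triangle']
```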

An example of the segmentation node data type, which stores every structure of an entity (patient), with every representation, in one place.

PerkLab carefully chose and implemented conversion algorithms to cover the widest variety of datasets. (For more information, see “Reconstruction of surfaces from planar contours through contour interpolation” [4].) The conversions are driven by a directed graph in which the nodes are representations and the edges are conversion algorithms; the mechanism finds the computationally cheapest conversion path between the master representation and the requested representation.
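Finding the cheapest path in such a graph is a classic shortest-path problem. The following standalone sketch uses Dijkstra’s algorithm over a toy conversion graph; the representation names are taken from the article, but the edge costs are made up for illustration:

```python
import heapq

def cheapest_conversion_path(graph, source, target):
    """Dijkstra's algorithm over a conversion graph.
    graph: {representation: [(neighbor, cost), ...]}; directed edges stand for
    conversion algorithms, and cost models their computational expense."""
    queue = [(0, source, [source])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == target:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, edge_cost in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(queue, (cost + edge_cost, neighbor, path + [neighbor]))
    return None  # no conversion path exists

# Hypothetical representations and costs (for illustration only).
graph = {
    "planar contour": [("binary labelmap", 5), ("ribbon", 1)],
    "ribbon": [("closed surface", 4)],
    "binary labelmap": [("closed surface", 3)],
}
print(cheapest_conversion_path(graph, "planar contour", "closed surface"))
# (5, ['planar contour', 'ribbon', 'closed surface'])
```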


The infrastructure of the mechanism potentially spans every workflow that involves segmented anatomical structures. Because the core implementation depends only on the Visualization Toolkit (VTK) [5], a wide variety of software tools in the field of medical image computing can readily adopt it. Similarly, within 3D Slicer, the infrastructure potentially spans all segmentation-related operations.

The new Segment Editor module was designed and implemented from scratch. It aims to be the main module for manual and semi-automatic segmentation. It is the successor of the 3D Slicer Editor module, which served as the starting point for its design. The user interface of Segment Editor is very similar to that of Editor, which eases the transition for users; the underlying implementation, however, is very different and, in some ways, simpler. New features include these:

  • real-time visualization of the 3D surface that results from the current state of the edited labelmap;
  • support for overlapping segments (e.g., structures) and advanced masking options;
  • a Segments Table panel, which allows robust per-structure handling and provides advanced visualization settings;
  • brushes that paint in 3D rather than in a slice-by-slice manner;
  • a Segment Editor widget, which any module or slicelet (i.e., a stripped-down user interface that focuses on a single task and provides a streamlined workflow) can embed;
  • editing options for oblique slices that do not align with the main axes of the anatomical volume; and
  • new editor tools (effects) such as Scissors, Fill between slices, Grow from seeds, etc.

The overall mechanism fills the basic needs of manual and semi-automatic segmentation, so a wide range of applications can benefit from it. Since data in other types is easy to convert into a segmentation node, users can rely on the mechanism even if their data comes from a third-party tool.

The aforementioned field of RT was the first target area for the mechanism, as most workflows in the field use numerous types of data representations. Visualizing and analyzing RT datasets has become considerably more straightforward and robust since the adoption of the mechanism. More specialized use cases have also benefited, including the fusion of magnetic resonance and ultrasound images for brachytherapy applications [6] and the evaluation of dosimetric measurements with gel [7] and film dosimeters. To facilitate more widespread adoption, however, a more prominent, mainstream use case was needed to demonstrate and prove the mechanism.

3D Printing Tutorial

3D printing is a popular and continually growing field, so it offers a natural use case that demonstrates the power of the new module and infrastructure. A recent tutorial [8] walks through the steps needed to create a spine phantom from a partial spine structure and a phantom base. In the tutorial, the partial spine structure is segmented in 3D Slicer from a computed tomography (CT) image, while the phantom base comes from a separate computer-aided design (CAD) application. The base facilitates reproducible, stable placement of the phantom in its plastic container, and it incorporates a marker holder into which electromagnetic (EM) sensors fit accurately. The tutorial thus exemplifies combining an external CAD design with a custom segmentation from 3D Slicer.

A simulation for facet joint injection training [9] inspired the design of the phantom. The simulation teaches trainees the complex technique of needle insertion but does not involve a live patient. It aims to support competency-based education by quantitatively evaluating the collected electromagnetic tracking and video data with the PerkTutor extension [10]. The extension is a training platform for image-guided interventions.

The tutorial results in a printed training phantom.

The tutorial uses a recent 3D Slicer nightly release instead of version 4.6.2. Although the modules in version 4.6.2 are stable, the 3D Slicer community continues to add useful features to Segment Editor that increase usability and provide new, convenient segmentation effects.

The tutorial can run on all of the operating systems that support 3D Slicer, which include Windows, macOS, and Linux. The featured effects are as follows.

  • Threshold: This effect is a very simple segmentation tool that creates a segment from a range of intensity values in the anatomical image. It can also be used to specify an intensity-based editing mask.
  • Islands: This effect handles connected components in a segment.
  • Scissors: This effect selects areas in two dimensions and volumes in three dimensions. Segment Editor can erase these selections, or it can fill inside or outside of them.
  • Logical Operators: This effect contains a set of Boolean operators that add, subtract, intersect, etc.
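The core idea behind the Threshold effect can be sketched in plain Python (this is an illustration of the concept, not the Segment Editor API): a segment is created by a per-voxel intensity range test.

```python
def threshold_segment(image, low, high):
    """Create a binary segment from voxels whose intensity lies in [low, high]."""
    return [[[1 if low <= v <= high else 0 for v in row]
             for row in plane] for plane in image]

# A tiny 1x2x3 "image"; in this toy example, bone-like intensities are above 300.
image = [[[120, 340, 80], [500, 20, 310]]]
print(threshold_segment(image, 300, 600))  # [[[0, 1, 0], [1, 0, 1]]]
```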

Segment Editor includes many other editor effects, and more are available through downloadable extensions. Using only the above four effects, however, it is possible to create a printable patient-specific phantom, as segmentation results are easy to export to a STereoLithography (STL) file that is ready for 3D printing.
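To make the export step concrete, here is a minimal writer for the ASCII variant of the STL format. The helper name is hypothetical and this is not how 3D Slicer exports STL internally; it only shows how simple the target format is (each facet lists a normal and three vertices):

```python
def write_ascii_stl(path, points, triangles, name="segment"):
    """Write a triangle mesh to an ASCII STL file.
    Normals are written as zero vectors; most tools recompute them anyway."""
    with open(path, "w") as f:
        f.write(f"solid {name}\n")
        for a, b, c in triangles:
            f.write("  facet normal 0 0 0\n    outer loop\n")
            for idx in (a, b, c):
                x, y, z = points[idx]
                f.write(f"      vertex {x} {y} {z}\n")
            f.write("    endloop\n  endfacet\n")
        f.write(f"endsolid {name}\n")

# A single triangle as a smoke test.
write_ascii_stl("phantom.stl", [(0, 0, 0), (1, 0, 0), (0, 1, 0)], [(0, 1, 2)])
```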


To address a very common need in medical image computing, PerkLab implemented a complex software infrastructure in 3D Slicer that facilitates more automated and structured handling of segmentation results. PerkLab created Segment Editor to provide an easy-to-use tool to manually or semi-automatically segment anatomical structures.

The final phantom model, created in 3D Slicer 4.7.

A tutorial demonstrates the features of these developments: a partial spinal column is segmented in 3D Slicer and merged with a CAD model to create a training phantom that can be 3D printed.


This mechanism and the related modules have been under development for years, over multiple iterations, and they are now stable. In the spirit of continual improvement, however, PerkLab encourages the community to convey ideas and comments to the developers via the 3D Slicer forum on Discourse [11].

To further help the community, PerkLab plans to create a video tutorial of the same workflow, with more examples that use Segment Editor. PerkLab also plans to connect more modules to the mechanism, such as the Segment Statistics module, which calculates volume, image intensity, and other statistics on segmented structures. Other work will integrate fractional labelmaps, which store segmentations at the same resolution as binary labelmaps but with considerably higher accuracy, and which also support probabilistic segmentations. Updates will follow.
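The kind of statistic Segment Statistics computes can be sketched as follows (plain Python with a hypothetical helper, not the module’s API): the volume of a binary segment is its inside-voxel count times the volume of a single voxel.

```python
def segment_volume_mm3(labelmap, spacing_mm):
    """Volume of a binary labelmap segment: voxel count x single-voxel volume."""
    voxel_count = sum(v for plane in labelmap for row in plane for v in row)
    sx, sy, sz = spacing_mm
    return voxel_count * sx * sy * sz

# 10 inside voxels with 0.5 x 0.5 x 2.0 mm spacing -> 5.0 mm^3.
labelmap = [[[1] * 5 for _ in range(2)]]  # a 1x2x5 grid, all inside
print(segment_volume_mm3(labelmap, (0.5, 0.5, 2.0)))  # 5.0
```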


Funding for this work came in part from Cancer Care Ontario, through Applied Cancer Research Unit and Research Chair in Cancer Imaging grants, and from the Ontario Consortium for Adaptive Interventions in Radiation Oncology.


  1. Mildenberger, Peter, Marco Eichelberg, and Eric Martin. “Introduction to the DICOM standard.” European Radiology 12 (2002): 920-927.
  2. Fedorov, Andriy, Reinhard Beichel, Jayashree Kalpathy-Cramer, Julien Finet, Jean-Christophe Fillion-Robin, Sonia Pujol, Christian Bauer, Dominique Jennings, Fiona Fennessy, Milan Sonka, John Buatti, Stephen Aylward, James V. Miller, Steve Pieper, and Ron Kikinis. “3D Slicer as an image computing platform for the Quantitative Imaging Network.” Magnetic Resonance Imaging 30 (2012): 1323-1341.
  3. Pinter, Csaba, Andras Lasso, An Wang, David Jaffray, and Gabor Fichtinger. “SlicerRT: Radiation therapy research toolkit for 3D Slicer.” Medical Physics 39 (2012): 6332-6338.
  4. Sunderland, Kyle, Boyeong Woo, Csaba Pinter, and Gabor Fichtinger. “Reconstruction of surfaces from planar contours through contour interpolation.” In Proc. SPIE, 94151R. Medical Imaging 2015: Image-Guided Procedures, Robotic Interventions, and Modeling, Orlando, Florida. The International Society for Optics and Photonics, 2015.
  5. Schroeder, Will, Ken Martin, and Bill Lorensen. The Visualization Toolkit: An Object-Oriented Approach to 3D Graphics (4th edition). Clifton Park: Kitware, 2006.
  6. Poulin, Eric, Karim Boudam, Csaba Pinter, Samuel Kadoury, Andras Lasso, Gabor Fichtinger, and Cynthia Ménard. “Validation of MRI To US Registration For Focal HDR Prostate Brachytherapy Treatment.” In proceedings of the 2017 ABS annual meeting. The Value of Brachytherapy in Multidisciplinary Cancer Care, Boston, Massachusetts. American Brachytherapy Society, 2017.
  7. Alexander, Kevin M., Csaba Pinter, Jennifer Andrea, Gabor Fichtinger, and L. John Schreiner. (2015). “3D Slicer gel dosimetry analysis: Validation of the calibration process.” In IFMBE Proceedings, 521-524. World Congress on Medical Physics and Biomedical Engineering, Toronto, Canada. Springer International Publishing, 2015.
  8. Pinter, Csaba. “Segmentation for 3D Printing” (tutorial presented at the 24th project week of the National Alliance for Medical Image Computing, Cambridge, Massachusetts, January 9-13, 2017).
  9. Moult, Eric, Tamas Ungi, Mattea Welch, J. Lu, Robert C. McGraw, and Gabor Fichtinger. “Ultrasound-guided facet joint injection training using Perk Tutor.” International Journal of Computer Assisted Radiology and Surgery 8 (2013): 831-836.
  10. Ungi, Tamas, Derek Sargent, Eric Moult, Andras Lasso, Csaba Pinter, Robert C. McGraw, and Gabor Fichtinger. “Perk Tutor: An Open-Source Training Platform for Ultrasound-Guided Needle Insertions.” IEEE Transactions on Biomedical Engineering 59 (2012): 3475-3481.
  11. “3D Slicer.” Discourse forum, accessed April 19, 2017.

Csaba Pinter is a research software engineer at Queen’s University in Canada. He spent several years in the industry, working on projects for medical image computing. He now delves into the world of open source. He is an active contributor to 3D Slicer, and he is the owner of the SlicerRT toolkit for radiation therapy. His main interest is the design of innovative medical applications.

Andras Lasso is a senior software engineer and the associate director of PerkLab. He joined PerkLab in 2009, after he spent nine years at GE Healthcare as a software engineer. His main interests are the development of high-quality reusable open-source software components and the employment of these components for building systems for translational research and clinical use.

Gabor Fichtinger is currently the director of PerkLab. He received his doctoral degree in computer science from Budapest University of Technology and Economics in 1990. He is a professor of computer science, electrical engineering, mechanical engineering, and surgery at Queen’s University, with adjunct appointments at Johns Hopkins University. He specializes in robot-assisted image-guided needle-placement procedures, primarily for cancer diagnosis and therapy.
