On the 31st of October, yes that’s Halloween, I successfully defended my PhD thesis/got doctorified. After presenting PelVis at IEEE VIS in Baltimore, I flew straight back to the Netherlands for this special occasion. Jetlag notwithstanding, I did manage to answer one hour’s worth of questions and can now officially call myself doctor (not that I would ^^). I’ve got some pictures for you, my faithful blog readers, of course, taken by my brother and official event photographer Joeri Smit:
I made a special page for my thesis, with some more details and a download link to the pdf version here. I had an amazing day, and would like to thank all those involved for this (read the Acknowledgments chapter in my thesis for a more wordy thank you ;)).
There were three winners in this image contest, and besides our pelvis, also Sergej Stoppel and Niels de Hoon received a prize, with their works entitled ‘Arteries in focus’ and ‘Turbulent flow in an aorta’, respectively:
I sent in the following submission:
Noeska Smit, Kai Lawonn, Annelot Kraima, Marco de Ruiter, Hessam Sokooti, Stefan Bruckner, Elmar Eisemann, Anna Vilanova
This image depicts PelVis, an interactive application for surgical planning for the Total Mesorectal Excision (TME) procedure. During this surgical procedure, undesired side-effects occur in up to 80% of the cases due to damage to the autonomic nerves. These nerves are damaged easily, since they are not visible in pre-operative MRI or even during surgery. In order to visualize these nerves, we built an atlas model, the Virtual Surgical Pelvis (VSP), that reveals zones in which the autonomic nerves reside, based on cryosection and immunohistochemical studies. In the PelVis application, we register this atlas to patient-specific clinical MRI data, and are thus able to build virtual models of the individual patient and to reveal the autonomic nerve zones pre-operatively, as displayed here in yellow. We highlight the distance of the mesorectal wall to these nerve zones using a colormap (red to white) combined with isolines. Furthermore, other surgically relevant anatomy is shown for spatial context, without occluding the view on the mesorectum, and the linked atlas-enriched MRI data can be explored interactively.
: Smit, N., Lawonn, K., Kraima, A., DeRuiter, M., Sokooti, H., Bruckner, S., … & Vilanova, A. (2017). PelVis: Atlas-based Surgical Planning for Oncological Pelvic Surgery. IEEE Transactions on Visualization and Computer Graphics. Accepted, to appear.
: Kraima, A., Smit, N. N., Jansma, D., West, N. P., Quirke, P., Rutten, H. J., … & DeRuiter, M. C. (2014). The virtual surgical pelvis: A highly-detailed 3D pelvic model for anatomical education and surgical simulation. European Journal of Surgical Oncology, 40(11), S32.
Our paper ‘Sline: Seamless Line Illustration for Interactive Biomedical Visualization’ was accepted for presentation at VCBM 2016, the 6th Eurographics Workshop on Visual Computing for Biology and Medicine. I’ve attended all VCBM editions since 2012, and am happy I can attend this one as well in Bergen, Norway, which is extra convenient, since it’s my new hometown! I accepted a position as a researcher in the amazing visualization group at the University of Bergen and just started this week ^^
Back to Sline though, it’s a cool technique where you can pick an illustrative rendering style per structure using a single parameter slider. Behold:
abstract: In medical visualization of surface information, problems often arise when visualizing several overlapping structures simultaneously. There is a trade-off between visualizing multiple structures in a detailed way and limiting visual clutter, in order to allow users to focus on the main structures. Illustrative visualization techniques can help alleviate these problems by defining a level of abstraction per structure. However, clinical uptake of these advanced visualization techniques so far has been limited due to the complex parameter settings required.
To bring advanced medical visualization closer to clinical application, we propose a novel illustrative technique that offers a seamless transition between various levels of abstraction and detail. Using a single comprehensive parameter, users are able to quickly define a visual representation per structure that fits the visualization requirements for focus and context structures. This technique can be applied to any biomedical context in which multiple surfaces are routinely visualized, such as neurosurgery, radiotherapy planning or drug design. Additionally, we introduce a novel hatching technique that runs in real-time and does not require texture coordinates. An informal evaluation with experts from different biomedical domains reveals that our technique allows users to design focus-and-context visualizations in a fast and intuitive manner.
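To give a feel for the single-slider idea, here is a toy sketch of how one abstraction parameter could be mapped onto a sequence of rendering styles per structure. This is my own illustration, not the actual Sline implementation; the style names, their ordering, and the blending scheme are all made-up assumptions:

```python
def style_for(abstraction):
    """Map one abstraction parameter in [0, 1] to a rendering style mix.

    Toy sketch: low values give detailed shaded surfaces (focus),
    high values give sparse line renderings (context).
    """
    if not 0.0 <= abstraction <= 1.0:
        raise ValueError("abstraction must lie in [0, 1]")
    # Hypothetical ordering from most detailed to most abstract.
    styles = ["shaded", "toon", "hatching", "contours"]
    # Continuous position within the style sequence.
    pos = abstraction * (len(styles) - 1)
    i = int(pos)
    blend = pos - i  # how far we are into the transition to the next style
    if i == len(styles) - 1:
        return styles[i], styles[i], 0.0
    return styles[i], styles[i + 1], blend

# Per-structure settings: focus structure stays detailed, context abstracted.
print(style_for(0.0))  # ('shaded', 'toon', 0.0)
print(style_for(0.5))  # ('toon', 'hatching', 0.5)
```

The point of a single parameter is exactly this: the user never touches the individual style settings, only the slider position per structure.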
Our paper ‘PelVis: Atlas-based Surgical Planning for Oncological Pelvic Surgery’ was accepted for presentation at our largest conference, VIS (and publication in IEEE Transactions on Visualization and Computer Graphics)!
Abstract: Due to the intricate relationship between the pelvic organs and vital structures, such as vessels and nerves, pelvic anatomy is often considered to be complex to comprehend. In oncological pelvic surgery, a trade-off has to be made between complete tumor resection and preserving function by preventing damage to the nerves. Damage to the autonomic nerves causes undesirable post-operative side-effects such as fecal and urinary incontinence, as well as sexual dysfunction in up to 80 percent of the cases. Since these autonomic nerves are not visible in pre-operative MRI scans or during surgery, avoiding nerve damage during such a surgical procedure becomes challenging.
In this work, we present visualization methods to represent context, target, and risk structures for surgical planning. We employ distance-based and occlusion management techniques in an atlas-based surgical planning tool for oncological pelvic surgery. Patient-specific pre-operative MRI scans are registered to an atlas model that includes nerve information. Through several interactive linked views, the spatial relationships and distances between the organs, tumor and risk zones are visualized to improve understanding, while avoiding occlusion. In this way, the surgeon can examine surgically relevant structures and plan the procedure before going into the operating theater, thus raising awareness of the autonomic nerve zone regions and potentially reducing post-operative complications. Furthermore, we present the results of a domain expert evaluation with surgical oncologists that demonstrates the advantages of our approach.
I recently mentioned a certain OAH webviewer short paper that was accepted as a EuroGraphics Education paper, but I now have some more news to share: the webviewer will be used in a Coursera course on anatomy of the abdomen and pelvis that starts this week, entitled ‘Anatomy of the Abdomen and Pelvis; a journey from basis to clinic’! This course is a Massive Open Online Course (MOOC), a free online course aimed at unlimited participation, and organized by highly qualified people at the Leiden University Medical Center.
So far, 7,796 people have already signed up. If you also have an interest in anatomy, why not sign up yourself? It’s completely free and completely awesome, I promise! I do have to add, it is not for the faint of heart, since it features live anatomical dissection videos. I am following the course myself and really enjoying the classes. I even just passed my first quiz with 100% correct answers. Try and beat that 😉
Soon, you could be playing around with this yourself:
Looking forward to seeing you as my virtual classmates in the course 🙂
So until last year, I never worked with shaders in VTK (the Visualization Toolkit). This was kind of sad actually, but I thought it would be super difficult, the need didn’t really arise and well, ain’t nobody got time for that! For our VIS 2016 submission though, I finally got to play around with them. And spoiler alert: it’s not that difficult and even kind of fun! Especially with a little help from highly intelligent co-authors 🙂
So today I would like to share a little Python example I made as a tutorial, featuring a cel-shaded (or toon-shaded if you’re one of those people) donut or skull in exactly 100 lines of code, probably half of them comments. Before we get started, a little sneak preview:
Right, so let’s get started! I’m using VTK version 6.3 built with the ‘old’ OpenGL backend and not the newer OpenGL2 one, in which custom shaders are handled in a different way. I’m using 64-bit Python 2.7 to go with this. You can find the skull surface and code you need here: shaderfun.
VTK supports GLSL shaders that you can add to your vtkActors. The easiest way to do this is, in my opinion, by specifying your shaders as multi-line strings using triple-quote blocks (''' bla ''' or """ bla """). You can define a vertex and a fragment shader in this way. To get your shaders to work with VTK, you need to define a propFuncVS function in the vertex shader, and a propFuncFS function in the fragment shader. These are the ones we’re using today:
The vertex shader:
vert = """
varying vec3 n;
varying vec3 l;

void propFuncVS(void)
{
    n = normalize(gl_Normal);
    l = vec3(gl_ModelViewMatrix * vec4(n, 0.0));
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
"""
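For completeness, here is a matching cel-shading fragment shader in the same multi-line string style. This is my own sketch to go with the vertex shader above; the band count, the fixed light direction, and the base color are arbitrary choices, not the only way to get the toon look:

```python
frag = """
varying vec3 n;
varying vec3 l;

void propFuncFS(void)
{
    // Assume a fixed 'headlight' pointing at the viewer
    vec3 lightDir = normalize(vec3(0.0, 0.0, 1.0));
    float d = max(dot(normalize(l), lightDir), 0.0);
    // Quantize the diffuse term into discrete bands for the cel-shaded look
    float intensity = floor(d * 4.0) / 4.0;
    gl_FragColor = vec4(vec3(0.9, 0.8, 0.7) * intensity, 1.0);
}
"""
```

Increasing the band count (the 4.0) moves the result gradually from toon-like back towards smooth shading.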
Once you’ve made some shaders, you want to add them to your vtkActor so that VTK knows which objects to shade. First you make a vtkShaderProgram2 and set its context to your render window. Then you make a vtkShader2 for every shader you made, e.g., one for the vertex shader and one for the fragment shader, set their source code to your multi-line GLSL strings, and set their context to that of the vtkShaderProgram2. Now you can add the shaders to the program. You get the OpenGL property of your actor and call SetPropProgram to attach your shader program to the actor. Finally, just set Shading to On on your property and you are good to go! Did I make it sound difficult? It’s not, here’s the relevant code:
# Now let's get down to shader-business...
# First we make a ShaderProgram2 and set it up on a date with the RenderWindow
pgm = vtk.vtkShaderProgram2()
pgm.SetContext(renWin)
# For both the vertex and fragment shader, we need to make a Shader2
# Also set them up with the RenderWindow, by asking the ShaderProgram2 for an introduction
shaderv = vtk.vtkShader2()
shaderv.SetType(vtk.VTK_SHADER_TYPE_VERTEX)
shaderv.SetSourceCode(vert)
shaderv.SetContext(pgm.GetContext())
shaderf = vtk.vtkShader2()
shaderf.SetType(vtk.VTK_SHADER_TYPE_FRAGMENT)
shaderf.SetSourceCode(frag)
shaderf.SetContext(pgm.GetContext())
# Now we add the shaders to the program
pgm.GetShaders().AddItem(shaderv)
pgm.GetShaders().AddItem(shaderf)
# And tell the actor property that it should totally use this cool program
openGLproperty = actor.GetProperty()
openGLproperty.SetPropProgram(pgm)
openGLproperty.ShadingOn()
That’s it for my basic tutorial on how to use shaders in VTK. If people are interested, I could make a more elaborate tutorial on how to pass attributes, use textures, and map scalar mesh properties in your VTK shaders 🙂 Hope this was helpful!
The thrice rejected, thrice cursed work on visualizing anatomical variations in branching structures has been accepted as a short paper at EuroVis 2016! This means I can finally show you a video of the work without jeopardizing the double-blind review process:
Abstract: Anatomical variations are naturally-occurring deviations from typical human anatomy. While these variations are considered normal and non-pathological, they are still of interest in clinical practice for medical specialists such as radiologists and transplantation surgeons. The complex variations in branching structures, for instance in arteries or nerves, are currently visualized side-by-side in illustrations or expressed using plain text in medical publications.
In this work, we present a novel way of visualizing anatomical variations in complex branching structures for educational purposes: VarVis. VarVis consists of several linked views that reveal global and local similarities and differences in the variations. We propose a novel graph representation to provide an overview of the topological changes. Our solution involves a topological similarity measure, which allows the user to select variations at a global level based on their degree of similarity. After a selection is made, local topological differences can be interactively explored using illustrations and topology graphs. We also incorporate additional information regarding the probability of the various cases. Our solution has several advantages over traditional approaches, which we demonstrate in an evaluation.
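To give a flavor of what a topological similarity measure over branching patterns could look like, here is a toy sketch of the general idea; this is not the measure from the paper, just one simple possibility where branchings are nested tuples and similarity is Jaccard overlap of their subtree shapes:

```python
from collections import Counter

def canon(tree):
    """Canonical form of a branching pattern given as nested tuples of
    children; sorting makes it invariant to the order of the children."""
    return tuple(sorted(canon(child) for child in tree))

def subtrees(tree):
    """Yield the canonical form of every subtree, the root included."""
    yield canon(tree)
    for child in tree:
        yield from subtrees(child)

def similarity(a, b):
    """Jaccard similarity over the multisets of canonical subtrees."""
    ca, cb = Counter(subtrees(a)), Counter(subtrees(b))
    return sum((ca & cb).values()) / sum((ca | cb).values())

# A trifurcation vs. two successive bifurcations of the same vessel:
trifurcation = ((), (), ())
successive   = ((), ((), ()))
print(similarity(trifurcation, trifurcation))  # identical: 1.0
print(similarity(trifurcation, successive))    # partial: shared leaves only
```

A measure like this gives a global score for ranking variations, after which the local differences still have to be inspected, which is what the linked views are for.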
Medical visualization isn’t always about rendering data in the most realistic way possible. In fact, quite often it isn’t. Illustrative rendering techniques have been developed that present medical data to the viewer in a style that is more abstract, often emphasizing important features, while taking emphasis away from the less relevant. Illustrative rendering techniques can even make data originating from medical imaging scanners, such as CT, look hand-drawn. Check out for instance the works by Tobias Isenberg, Stefan Bruckner, Ivan Viola and Kai Lawonn to name a few. As you may recall, last year’s VCBM paper I had the pleasure of co-authoring also involved illustrative rendering:
So what you see here kind of looks like a sketch hinting at the shape of the human body on a table right (well with some limbs missing ;))? Actually it’s a rendering of a CT scan combining toon shading and feature lines without any artists involved. Here’s another example by Roy van Pelt:
He visualized blood flow using particles that get elongated as they move faster in combination with speed lines indicating the direction and speed of the particles. This style is reminiscent of something you could see in comic books or cartoons. In fact, I think there is a lot we can learn from comic book artists that we can apply to medical visualization.
For this reason, I asked Gerrit Rijken, AKA Iosua, a freelance illustrator as well as comic book aficionado, to write about the basics of inking, and more specifically the do’s and don’t’s and why’s of it all. In his elaborate post, which you can find here, he talks us through many things that are of interest in illustrative rendering as well. He describes how inking techniques are applied to create textures, the importance of line weights, spotting blacks, screen tones, feathering and cross-hatching. He provides explanations of these techniques combined with examples illustrating the concepts.
I see many parallels to visualization techniques. For instance, focus-and-context techniques and depth cues are relevant for both comic books and medical visualizations. There are also some interesting rules on line thickness that I had not considered before. Furthermore, I see techniques that have already been adopted in medical illustrative rendering, such as stippling and hatching, while I have not seen researchers using feathering yet (correct me if I’m wrong ^^). I hope you find this as interesting as I did and if you have further questions, do not hesitate to contact him!
So I guess we all know rainbows are bad, mmmkay. That’s why I wrote a lengthy blogpost for medvis.org and more recently a follow-up featuring four new pretty colormaps designed for Matplotlib. I realized though that I did not see the prettiness applied to medical datasets so I went to town with Paraview 5.0 (which includes the new colormaps by default now). I took these colormaps for a spin on a slice of the CALIX dataset from the OsiriX dataset collection, an arterial phase CT scan of the abdomen (and part of the thorax as you see below). I wanted to put the four new colormaps (Magma, Inferno, Plasma, and Viridis) side-by-side with the traditional grayscale and Jet (AKA rainbow) colormap to show what they look like on a medical dataset:
Looks kind of artistic, am I right? I would totally buy a print and frame it! So the gray scale is doing fine of course, and is what we are accustomed to in medical images. The banding in the Jet colormap focuses the attention on the vertebra and ribs, which is probably not what you are actually interested in. Also, information is lost in the soft tissue areas (muscle and fat). The four proposed colormaps are doing fine, though Plasma and Viridis start from blue tones, which is probably not what you expect when viewing on a black background, but can work well in case the low intensity regions contain information you actually want to show on a black background. I would love to compare with MATLAB’s Parula, but hey, proprietary, what’s a girl to do ^^
Now, of course, there is not much reason to replace the gray map when viewing single modality medical imaging data, but it could be interesting to consider for multimodal fusion viewing. For instance, when PET and CT are combined (either from a hybrid scanner or by software registration), often the anatomical CT data is represented in a gray colormap, while the functional metabolic information from the PET is shown overlaid in a color colormap (lol). This works well because both colormaps can be perceived simultaneously very well. For this, often a heated-body colormap is used, but you could also consider these four new options, as this is not a standardized choice and varies per manufacturer.
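At its core, that fusion view boils down to simple per-pixel alpha blending of the anatomical gray value with the functional color. A minimal sketch with made-up numbers (real viewers also apply windowing and an uptake threshold before blending):

```python
def fuse(ct_gray, pet_rgb, alpha=0.4):
    """Blend a grayscale CT value (0..1) with a colored PET overlay.

    ct_gray: scalar intensity from the anatomical CT
    pet_rgb: (r, g, b) from the functional colormap, each 0..1
    alpha:   overlay opacity
    """
    r, g, b = pet_rgb
    return (
        (1 - alpha) * ct_gray + alpha * r,
        (1 - alpha) * ct_gray + alpha * g,
        (1 - alpha) * ct_gray + alpha * b,
    )

# Bright CT bone under a hot (orange) PET uptake region:
print(fuse(0.8, (1.0, 0.5, 0.0)))  # roughly (0.88, 0.68, 0.48)
```

Because the CT contributes only luminance and the PET contributes hue, both layers stay readable at once, which is exactly why this combination works so well.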
So far I’m talking about 2D representations, such as the traditional slice views above. When rendering 3D scenes though, and for instance wanting to map a scalar value on a surface to a color, one should be careful with these new maps. Since they are not iso-luminant, the changes in luminance may interfere with surface shading from lighting. Let’s take a look at a silly example:
This is a bit of a stupid example, because I don’t have proper data I can show you (I am only working with the most top-secret rectum information). But what we see here is a surface model of the os coxae with the normal Z component mapped to the Magma colormap. So what’s shading and what’s normal information in this case? Who’s to say? I am obviously:
So this is why you need iso-luminant colormaps for 3D surface information mapping, kids! Also, consider Plasma, Inferno, Magma, and Viridis for your 2D visualizations because:
They are beautiful
They are colorblind safe
They are printer-friendly (this means printers become happy when you print them (j/k, it means when you print in grayscale it still works))
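The luminance argument from the 3D example is easy to check numerically: relative luminance can be approximated from RGB with the standard Rec. 709 weights, and for a non-isoluminant map like Magma it varies over almost the whole range. The colormap sample values below are rough approximations of Magma’s endpoints, for illustration only:

```python
def luminance(rgb):
    """Relative luminance (Rec. 709 weights) of a linear RGB triple."""
    r, g, b = rgb
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

# Approximate dark and light endpoints of Magma (rough values):
magma_dark, magma_light = (0.0, 0.0, 0.02), (0.99, 0.99, 0.75)
print(luminance(magma_dark))   # near 0: almost black
print(luminance(magma_light))  # near 1: almost white
# This large luminance range is exactly what makes the map readable in 2D,
# and exactly what fights with the shading when mapped onto a lit 3D surface.
```

An iso-luminant map would keep this value roughly constant along the colormap, leaving luminance free to encode the lighting.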