SOFTWARE DEVELOPER’S QUARTERLY Issue 17 April 2011
The Kitware Source contains articles related to the develop-
ment of Kitware projects in addition to a myriad of software
updates, news and other content relevant to the open source
community. In this issue, Alejandro Ribes and Christian Boucheny discuss the use of Eye-Dome Lighting in ParaView and how it can improve depth perception in the visualization of 3D datasets. Luis Ibanez explores the use of virtual environments in different applications and the opportunities they can provide. Marcus Hanwell gives an overview of the pointer classes in the Visualization Toolkit (VTK) and the need each satisfies in VTK. Zach Mullen talks about MIDAS’s
use of Git and explains how to utilize macros to manage
the synchronization of data between the MIDAS server and
source repositories. In Kitware News, Will Schroeder reflects,
in honor of Kitware’s 13th anniversary, on the factors that
have enabled Kitware’s growth over the years.
The Source is part of a suite of products and services
offered to assist developers in getting the most out of our
open source tools. Project websites include links to free
resources such as: mailing lists, documentation, FAQs and
Wikis. Kitware supports its open source projects with text-
books, consulting services, support contracts and training
courses. For more information, please visit our website at
www.kitware.com.
Editor’s Note ........................................................................... 1
Recent Releases ..................................................................... 1
Eye-Dome Lighting: A Non-Photorealistic Shading
Technique ............................................................................... 3
Virtually Everywhere ............................................................ 5
A Tour of VTK's Pointer Classes ............................................ 8
Hosting Binary Files on MIDAS to Reduce
Git Repository Size ................................................................ 9
Kitware News ...................................................................... 11
ITK MODULARIZATION IN ITK 4.0
One of the major undertakings of the upcoming ITK 4.0
release is the modularization of ITK. Modularization is the
process by which the many classes of ITK will be grouped
into smaller and cohesive components. We will refer to those
components as modules. This grouping will enable users to
select a subset of those components to be used for support-
ing the development of their application.
The rationale for modularizing the toolkit is the following:
Growth management
Raising the bar of software quality
Removing outdated pieces of software
Facilitating the use of add-ons to ITK
The need for managing the growth of the toolkit is clearly illustrated in the figure below. Active development of the
toolkit was funded by the National Library of Medicine
from 1999 to 2005. After that period, only maintenance of
the software was funded. Despite the fact that after 2005
there was no direct funding for development, the toolkit
continued to grow in an almost linear way. This fact in itself
demonstrates that the toolkit was adopted and cared for
by a large community of contributors. We anticipate that
the toolkit will continue to grow at a similar rate, as new
algorithms appear in the literature and are ported to ITK.
We must therefore prepare a process of organized growth
ensuring the manageability of the software despite its size.
Modularization will also facilitate the evaluation of software quality control metrics at a finer granularity, such as code coverage, valgrind errors, doxygen documentation warnings and coding style. This finer granularity of reporting enables developers to work on problems more effectively, since the effort can be focused in a given module, and the outcomes of that effort are more clearly verified without being confused with the state of the many other files in the system.
The process of modularizing the toolkit is similar to the process of moving from one home to another. It gives us an opportunity to find many forgotten objects and to reconsider whether we want to keep them or not. We have identified a number of classes to be deprecated, and have also remade the entire CMake infrastructure of the toolkit, now taking advantage of the most recent functionalities of CMake 2.8.3. Finally, we will configure the modularization in such a way that institutions can offer additional ITK modules to third party users, essentially creating an open ITK market for add-ons. The ITK ecosystem will then expand to include specialized modules that may be of interest to a subset of the community and that traditionally may not have been considered general enough to be included in the toolkit itself.
One of the key aspects of the modularization is the formalization of dependencies across modules. The figure below illustrates these dependencies by taking advantage of the information visualization functionalities of VTK.
Application developers will be able to prepackage the subset
of ITK that they actually use, and thus reduce the burden of
maintenance of their complete software environment.
We look forward to the feedback of the ITK commu-
nity regarding their use cases for modularization and
suggestions for making the toolkit a more useful and
maintainable resource for the decade to come. Please see
http://www.itk.org/Wiki/ITK_Release_4/Modularization
CMAKE 2.8.5
The CMake version 2.8.5 release is scheduled for April 2011. The bug-fix roadmap for version 2.8.5 can be found at http://public.kitware.com/Bug/roadmap_page.php. The change log for bugs that have already been fixed is located at http://public.kitware.com/Bug/changelog_page.php.
Please try the release candidates for CMake 2.8.5 to build
your projects as they become available in April. Let us know
on the mailing list if you run into anything unexpected.
MIDAS
The team working on MIDAS has just released updates for
the MIDAS Server and MIDAS Desktop. The MIDAS Server
2.8.0 release includes improved server-side processing via
the BatchMake plugin, enhanced API to support the MIDAS
desktop and an updated shopping cart for data download
and aggregation. Additionally, we have enhanced its stability on Windows servers, and added support for files greater than 4 GB. Also of note is the addition of hash-addressed file downloads via the web API and user agreements to afford community administrators greater flexibility in data dissemination. Lastly, MIDAS has moved to Git for version control.
MIDAScpp 1.6.0 has added several new features. The
MIDASDesktop GUI layout has been redesigned for usability,
progress reporting during upload and download has been
improved and a one-step upload command to MIDAScli
has been added. There are now options in the Preferences
menu to copy or move the entire resource tree to a new
location and the client tree is now updated dynamically as
data is pulled. Many aspects of MIDASDesktop that were slow have been multithreaded, and MIDASDesktop now actively monitors the filesystem so that changes to files under local database control will be recognized automatically by the application. Metadata fields have been added, including the total size of communities, collections, and items, and can be updated with rich text editors.
Additionally, with this new release, users can:
Import existing data into their database without having to
pull it from the server.
Cancel long running network-based operations in
MIDASDesktop, such as pulls and refreshes.
Create a new empty local database at any time via
MIDASDesktop.
Pull data from the server by dragging and dropping
resources between trees.
Authenticate using username and password instead of
with a web API key.
IGSTK
The Image-Guided Surgery Toolkit team released IGSTK 4.4
in February. This minor release has several new features,
including support for Ascension's 3D Guidance trackers
(medSAFE, driveBAY and trakSTAR); PET image readers,
spatial object and representation classes; and PET/CT fused
image and electromagnetic tracker-guided needle biopsy
application examples. The build instructions and new release
download can be found on the IGSTK public wiki.
A PET/CT guided needle biopsy prototype application: PET (hot
metal color mapping) overlaid on top of CT (grey scale). The center
of the PET hot spot is the target, shown in red. Also shown are the
needle (green tube), virtual tip (yellow tube) and target (red ball).
EYE-DOME LIGHTING: A NON-PHOTOREALISTIC SHADING TECHNIQUE
Eye-Dome Lighting (EDL) is a non-photorealistic, image-based shading technique designed to improve depth perception in scientific visualization images. It relies on efficient post-processing passes implemented on the GPU with GLSL shaders in order to achieve interactive rendering. Only projected depth information is required to compute the shading function, which is then applied to the colored scene image. EDL can, therefore, be applied to any kind of data regardless of their geometrical features (isosurfaces, streamlines, point sprites, etc.), except to those requiring transparency-based rendering.

In this article, we first briefly describe EDL and then give some details about how it has been integrated into ParaView. EDL was developed by Christian Boucheny during his Ph.D. [1]. Its original aim was to improve depth perception in the visualization of large 3D datasets representing complex industrial facilities or equipment for Electricité de France (EDF). Indeed, EDF is a major European electrical company where engineers visualize, on a daily basis, complex data such as 3D scans of power plants or results from multi-physics numerical simulations.
WHAT IS EYE-DOME LIGHTING?
Shading occupies a special place among the visual mecha-
nisms serving to perceive complex scenes. Global illumination
models, including a physically inspired ambient occlusion
term, are often used to emphasize the relief of surfaces and
disambiguate spatial relationships. However, applying such
models remains costly, as it often requires heavy pre-com-
putations, and is thus not suited for an exploratory process
in scientic visualization. On the other hand, image-based
techniques, such as edge enhancement or halos based on
depth differences, provide useful cues for the comprehen-
sion of complex scenes. Subtle spatial relationships that
might not be visible with realistic illumination models can
be strengthened with these non-photorealistic techniques.
The non-photorealistic shading technique we present here,
EDL, relies on the following key ideas.
Image-based lighting: Our method is inspired by ambient
occlusion or skydome lighting techniques, with the addi-
tion of viewpoint dependency. Contrary to the standard
application of these techniques, in our approach the com-
putations are performed in image coordinate space, using
only the depth buffer information, like in Crytek Screen-
Space Ambient Occlusion (SSAO) [2]. These techniques do
not require representation in object coordinate space, and
thus there is no need for knowledge of the geometry of the
visualized data or for any preprocessing step.
Locality: The shading of a given pixel should rely predomi-
nantly on its close neighborhood in image space, as the
effects of long range interactions will not likely be initially
detected by the viewers.
Interactivity: Our primary concern is to avoid costly opera-
tions that would slow interactive exploration and thus limit
the comprehension of the data. Due to the evolution of
graphics hardware, a limited set of operations performed on
fragments appears to be the most efficient approach.
Figure 1 presents a diagram of the architecture that inte-
grates EDL. The algorithm requires a projected color image of
the 3D scene and its corresponding depth buffer. The depth
buffer is the input of the EDL algorithm, which generates a
shadow image to be combined with the color rendering of
the scene (e.g., by multiplying each pixel's RGB components
by its EDL-shading value).
Figure 1. Depiction of the rendering architecture integrating Eye-Dome Lighting. A 3D scene (left colored surface) is first projected using an off-screen OpenGL rendering. The resulting color and depth images are stored in two 2D textures. The depth image is then used to calculate the shading by applying the EDL algorithm. The result is used to modulate the color image that is finally drawn on the screen (e.g., by multiplying each pixel's RGB components by its EDL-shading value).
The basic principle of the EDL algorithm is to consider a half-
sphere (the dome) centered at each pixel p. This dome is
bounded by “a horizontal plane,” which is perpendicular to
the observer direction at point p. The shading is a function
of the amount of this dome visible at p, or conversely, it is
inversely determined by the amount of this dome hidden
by the neighbors of p (those being taken on a ring around
p in image space). In other words, a neighbor pixel will reduce the lighting at p if its depth is lower (i.e., closer to the viewer) than that of p. This procedure defines a shading amount that depends solely on the depth values of the close neighbors. To achieve a better shading that takes into account farther neighbor pixels, a multi-scale approach is implemented, with the same shading function being applied at lower resolutions (typically half and quarter image size). Those shaded images are then filtered in order to limit the aliasing induced by the lower resolution, using a cross bilateral filter [3] (a Gaussian blur modulated by depth differences), and then merged together with the full resolution shaded image. This approach is graphically represented in Figure 2. The figure presents two levels of resolution, corresponding to the ParaView implementation. In general, more multi-resolution levels can be used if desired.
Figure 2. The shading function is computed at full resolution and a lower resolution. The final rendering (right) is a weighted sum of the two shading images, with a cross bilateral filter (Gaussian blur modulated by depth differences) applied to the lower resolution map to prevent aliasing and achieve a “halo” effect.
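To make the principle concrete, here is a minimal, single-scale CPU sketch of the shading step described above. It is not the ParaView code (the actual implementation runs as GLSL fragment shaders); the 4-pixel neighborhood, the exponential depth-to-shading mapping and the function name are illustrative assumptions only.

// Illustrative, single-scale EDL-style shading of a depth image.
// depth holds normalized depth values (smaller = closer to the viewer).
#include <algorithm>
#include <cmath>
#include <vector>

std::vector<float> ComputeEDLShading(const std::vector<float>& depth,
                                     int width, int height,
                                     float strength = 100.0f)
{
  std::vector<float> shade(depth.size(), 1.0f);
  const int dx[4] = { -1, 1, 0, 0 };
  const int dy[4] = { 0, 0, -1, 1 };
  for (int y = 0; y < height; ++y)
  {
    for (int x = 0; x < width; ++x)
    {
      const float d = depth[y * width + x];
      float obscurance = 0.0f;
      // Neighbors that are closer to the viewer than p "hide" part of
      // the dome centered at p and therefore darken it.
      for (int k = 0; k < 4; ++k)
      {
        const int nx = x + dx[k];
        const int ny = y + dy[k];
        if (nx < 0 || ny < 0 || nx >= width || ny >= height)
          continue;
        obscurance += std::max(0.0f, d - depth[ny * width + nx]);
      }
      // Map the accumulated depth differences to a factor in (0, 1]
      // that multiplies the RGB components of the color image.
      shade[y * width + x] = std::exp(-strength * obscurance);
    }
  }
  return shade;
}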
Figure 3. An image produced on EDF physical simulation data
rendered with point sprites illustrates the effect of EDL (left) com-
pared to basic Phong shading (right).
COMPILING AND USING EDL IN PARAVIEW
Eye-Dome Lighting shading is implemented in ParaView as a
plugin. The code can be found in the ParaView source tree
in Plugins/EyeDomeLighting. Before the system is built, the variable PARAVIEW_BUILD_PLUGIN_EyeDomeLighting must be switched to ON in the CMake interface so that the plugin is built.
Once the system is built, the plugin can be loaded via Manage Plugins in the Tools menu. The dynamic library of the EDL plugin is currently called “libRenderPassEyeDomeLightingView.” When loading it, a new type of view, named “Render View + Eye Dome Lighting,” will appear in the list of available views. Simply select it and any 3D data loaded in the view will be shaded with EDL. As an example, Figure 3 shows the visualization in ParaView of point sprites for a mechanical simulation performed at EDF R&D. Note that, for now, the EDL shading function is superimposed on the classical view (Gouraud shading) because of the ParaView-specific material pipeline. Modifying the latter would permit the definition of other shadings with a greater degree of flexibility.
PLUGIN ARCHITECTURE IN PARAVIEW
The EDL algorithm is implemented as a vtkImageProcessingPass. This allows us to call the algorithm from the plugin in the following way:
void vtkPVRenderViewWithEDL::Initialize(unsigned int id)
{
  this->Superclass::Initialize(id);

  // Create the EDL render pass and hand it to the synchronized
  // renderers so it is applied as a post-processing image pass.
  vtkEDLShading* EDLpass = vtkEDLShading::New();
  this->SynchronizedRenderers->SetImageProcessingPass(EDLpass);

  // EDL needs the depth buffer; it is not captured by default.
  this->SynchronizedRenderers->SetUseDepthBuffer(true);
  EDLpass->Delete();
}
Looking at this code, we can already describe some important implementation details.

The plugin itself consists of a vtkPVRenderView, which is a ParaView view. The view contains a new method, SetImageProcessingPass, that will insert our algorithm in the correct position in the visualization pipeline. This framework for post-processing image passes was recently added by Kitware for EDF R&D. However, the position where the image processing pass is inserted does not currently allow transparency to be applied properly. This is due to the more complex pipeline design for transparency rendering, which relies on depth peeling; further development in VTK would be needed if this is required.
A method, SetUseDepthBuffer, has been added to vtkPVSynchronizedRenderer to switch on the use of the depth buffer. EDL needs the depth buffer, but most algorithms do not, and always capturing it could slow the system down when it is not needed. To avoid this problem, SetUseDepthBuffer is provided, and the user is responsible for activating the depth buffer if his/her algorithm requires it. Its default value is off.
The use of the depth buffer by EDL was one of the main challenges of integrating EDL in ParaView. Indeed, the plugin should work in standalone, client-server and parallel-server modes; moreover, tiled displays are also taken into account. Some development was needed to allow parallel compositing of the depth buffer using IceT (the parallel library used in ParaView for parallel compositing operations). Exposing this functionality from IceT to the render passes was the main object of a collaboration between EDF R&D and Kitware for the proper implementation of EDL.
SHADING ALGORITHMS
vtkEDLShading is based on another class called vtkDepthImageProcessingPass. This class contains some general methods that are not specific to the EDL algorithm and can be used to implement other algorithms. For instance, we implemented an image-based ambient occlusion shading algorithm (based on Crytek SSAO) and a ParaView view based on it (not currently included in the plugin). Thus, any user could derive a class from vtkDepthImageProcessingPass to implement this kind of algorithm, as the sketch below illustrates.
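As a rough illustration of that extension point, a custom pass could be declared along the following lines. Treat this as a hedged sketch only: the Render() signature comes from the standard vtkRenderPass API, but the class name and the outline in the comments are assumptions, not the actual vtkEDLShading implementation.

// Hypothetical skeleton of a depth-based post-processing pass
// derived from vtkDepthImageProcessingPass.
#include "vtkDepthImageProcessingPass.h"
#include "vtkObjectFactory.h"
#include "vtkRenderState.h"

class vtkMyDepthShading : public vtkDepthImageProcessingPass
{
public:
  static vtkMyDepthShading* New();
  vtkTypeMacro(vtkMyDepthShading, vtkDepthImageProcessingPass);

  // Entry point called by the render pass chain.
  virtual void Render(const vtkRenderState* s)
  {
    // 1. Delegate to the contained pass to obtain the scene's color
    //    and depth images as textures.
    // 2. Bind the depth texture and run a custom GLSL program that
    //    computes a shading image from it.
    // 3. Combine the shading image with the color image and draw it.
    (void)s; // body intentionally left as an outline
  }
};

vtkStandardNewMacro(vtkMyDepthShading)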
ACKNOWLEDGEMENTS
The EDL algorithm is the result of joint work by Electricité de France, CNRS, Collège de France and Université J. Fourier as part of the Ph.D. thesis of Christian Boucheny.
REFERENCES
[1] Boucheny, C. (2009). “Interactive scientific visualization of large datasets: towards a perceptive-based approach.” PhD thesis, Université Joseph Fourier (in French only).
[2] Kajalin, V. (2009). “Screen Space Ambient Occlusion.” In W. Engel (Ed.), ShaderX7 – Advanced Rendering Techniques. Charles River Media.
[3] Eisemann, E., and Durand, F. (2004). “Flash photography enhancement via intrinsic relighting.” ACM Transactions on Graphics, 23(3):673-678.
Alejandro Ribés is currently working in scientific visualization at EDF R&D. He has experience at the University of Oxford, UK; the French Atomic Energy Commission, Orsay, France; and the National Yang-Ming University, Taipei, Taiwan. He received his Ph.D. from the École Nationale Supérieure des Télécommunications, Paris, France.

Christian Boucheny is a research engineer at EDF R&D, France, specialized in scientific visualization and virtual reality for maintenance and training. He received his Ph.D. in 2009 from the University of Grenoble, where he worked on perceptual issues related to the visualization of large datasets.
VIRTUALLY EVERYWHERE
WHAT IS A COMPUTER?
There was a time when it was easy to answer this question,
a time when a computer was mostly a physical, or hardware,
device (see Figure 1). It used to be that on top of that hard-
ware device, a thin layer of logic was used to control the
tasks performed by the physical layer.
Figure 1. The ENIAC, which became operational in 1946, is considered to be the first general-purpose electronic computer. It was a digital computer that was capable of being programmed to address a range of computing issues (http://en.wikipedia.org/wiki/ENIAC).
There have been signicant changes since those early days.
Modern computers are an amalgam of physical hardware
complemented with layer upon layer of abstraction logic.
Users of modern computers interact with the higher abstrac-
tion layers and no longer get exposed to the details of the
lower layers. This multi-layered organization has made it
possible to intercept the middle layers and fully disconnect
the operating system from the actual physical device in
which it is running. As a consequence, what we used to call
“the computer” is now disembodied from the physical layer
and can therefore be moved across different receptacles
where it can be incarnated.
This technology, in general called virtualization, has recently
transitioned from advanced and exclusive to mainstream.
We have therefore started taking advantage of it, mostly in
regards to disseminating the use of open-source software.
The beauty of virtualization is that the virtual computer
is literally a stream of bytes, and therefore can be stored,
copied, modied and redistributed just as any other digital
good. Granted, it tends to be quite a large file, but so are
movies in digital formats.
The main scenarios in which we have recently been using
virtualization are:
Teaching
Debugging
Providing reference systems
Running the reproducibility verification of the Insight Journal
Trying new OS distributions without committing to fully reinstalling our computer
Here we elaborate on our experience using virtualization in
some of these scenarios.
DEBUGGING
Despite being aware of virtualization for some time, the event that sparked our attention came as a secondary effect of teaching our class, “Open Source Software Practices,” at Rensselaer Polytechnic Institute. In order to expose students to the inner working practices of an open source community, they are required to work on a joint class software project and then on small group projects. In a recent
class, for the joint project we chose to work with Sahana
[1], a piece of humanitarian free and open source software
(HFOSS)[2], designed to coordinate the delivery of humani-
tarian relief to disaster zones.
One of the easiest ways to get introduced to the system was
for the students to sign up with the Sahana bug tracking
database and select easy bugs that could be tackled in a
couple of days. The Sahana development team has done an
excellent job of preparing an easy entry path for new contributors. One of the most remarkable items in that reception area was the presence of a virtual appliance with a fully configured build of Sahana, along with a minimal mockup database intended for testing purposes [3]. Students were able to download the virtual appliance (for VirtualBox), boot it on their own laptops, and be working on fixing a bug in a matter of minutes. This was an eye-opening experience.
We know quite well, from our maintenance experience with our open source toolkits, that bringing new developers into an open source project is not a trivial feat. The fact that Sahana succeeded so well in delivering a “portable” platform (in the form of these virtual machines) that new developers can take and use instantly, without installing additional software, dealing with version incompatibilities, or having to get familiar with the installation details of a database and an associated web server (such as permissions and policies), makes this approach a clear winner.
One of the very appealing properties of this method is that new developers do not need to compromise the configuration of their current computers just to work on a particular bug of a given project. We have seen on many occasions that the effort to replicate the conditions in which a bug happens may require the installation of specific versions of libraries or software tools and their configuration in a particular fashion. Over time, the computers of developers who must do this on a continuous basis end up having an overwhelming mixture of installed libraries, which can easily generate conflicts and obstruct maintenance. With a virtual machine, on the other hand, the process is perfectly clean. The machine is downloaded and booted; the bug is explored and fixed; a patch is submitted; and the virtual machine is shut down and discarded. The developer then returns to a clean computer, with no secondary traces or inconvenient remnants from the recent bug-fixing excursion.
TEACHING
When teaching a course to a group of, let’s say, 30 people,
it is highly desirable to ensure that all of them have the
software correctly installed, a similar directory structure, and
access to the necessary source code and possibly any other binary tools that may be needed for the course. For example, a typical ITK course will require you to have the source code of ITK installed, the proper version of CMake, a well-configured compiler, and a set of data files suitable to be used as input for hands-on exercises. This has to be done despite the fact that attendees will use their personal laptops for the course, and therefore will have a large variety of hardware platforms, operating systems and development environments installed on them.
In this context, virtualization offers an interesting alternative. Once a virtualization application is installed on the course attendees' computers, it becomes possible to give each of them a virtual appliance that has been carefully crafted to contain all the software tools needed for the course. Such an appliance can be delivered to attendees in the form of a USB memory stick or a traditional CD.
The Microscopy Tutorial at MICCAI
At the MICCAI 2010 conference, we delivered a tutorial
on “Microscopy Image Analysis.” As usual, following a
pragmatic approach to training, we wanted to incorporate
hands-on exercises in this tutorial, but we were challenged
by the need to install a full application (V3D developed by
Hanchuan Peng’s team at the HHMI - Janelia Campus) along
with a full build of ITK and the set of input data required to
run exercises.
The computers used for the tutorial, however, were the
laptops that attendees brought as their personal machines
to the conference. This was a double challenge. First, a wide variety of different machines was used (Macs, Linux and Windows), and second, the configurations of these machines were vastly different. They had different versions of operating systems and different types and versions of build systems. Virtual machines were therefore a natural choice to isolate that heterogeneity from the uniform platform that we needed to use for delivering a common experience to the course attendees.
The preparation for the course involved two independent stages. The first step was installing the virtualization software (in this case VirtualBox, from Oracle). The second step was installing the image of the actual virtual machine (also known as a “virtual appliance”). This preparation of the virtual machine certainly requires considerable time and attention, but it has the advantage that its outcome becomes reusable and redistributable.

The slides used for this tutorial [4] and the virtual machine [5] are available on the MIDAS database.

In this particular case, the file containing the virtual machine is about 2 GB in size, but there are better ways to compact a virtual appliance than what we used here. This virtual appliance can be run both in a VirtualBox application and in a VMWare server.
Figure: the virtualization stack (Host Hardware, Host Operating System, Virtualization Application, Guest Operating System). VirtualBox runs as a standard application on top of the operating system of the host machine. The VirtualBox application can launch the image of a preconfigured virtual machine in which a guest operating system has been installed.
Our use of virtual appliances in the MICCAI Microscopy
tutorial was so rewarding that we will be using them again
for the following upcoming tutorials:
CVPR 2011: Video Bridge between ITK and OpenCV
MICCAI 2011: Microscopy Image Analysis
MICCAI 2011: SimpleITK: Image analysis for human beings
MICCAI 2011: ITKv4: The next generation
INSIGHT JOURNAL
The Insight Journal is the vehicle for members of the ITK
community to share their own developed classes with others.
One of the unique characteristics of the Insight Journal is that it is the only journal that verifies the reproducibility of the work described in a paper. In order to do this, authors must include with their papers the full set of source code, data and parameters that enable others to replicate the content of the paper. The system run by the Insight Journal takes this source code, compiles it, runs it, and finally compares the generated data with reference data provided
by the authors. Given that the Journal receives submissions
from any registered user, and that registration requirements
are minimal, the implementation of the Journal translates
into: “Here is an online system in which we are going to
take source code submitted by anyone on the Internet, and
we will compile it and run it.” This is something that is not
necessarily the most prudent thing to do.
In order to restrict the risk of damage, whether by mali-
cious or defective code, a Xen virtualization platform was
put in place. In this platform, a Linux virtual machine in
which commonly used libraries and software tools have
been preinstalled is instantiated from scratch in order to
test each individual paper submission. In this way, every
paper is tested in a uniform environment that is in pristine
condition. Should anything go wrong with the build or the
execution process, the damage is contained because the
instantiation of the virtual machine is discarded after the paper verification terminates.
The virtualization environment also enables us to create a safe “walled garden” in which the code is tested with limited access to risky services such as networking. The image of the virtual machine is updated regularly to include recently released versions of ITK, VTK and CMake, among other tools.
Given that in some cases users may want to further customize the configuration in which the paper source code is evaluated, we are considering an option in which authors create the submitted paper by taking a publicly available copy of a pre-configured virtual appliance, customizing it by installing additional software (including the software that implements the content of their paper), and finally packing the resulting new virtual appliance and submitting it as a paper to the Insight Journal.
This of course has the overhead of transmitting and storing very large files, but it also opens a whole new horizon of possibilities when it comes to the richness of content that can be made part of a technical or scientific publication. It would be one step closer to achieving an ideal environment for the verification of reproducibility in computational sciences.
THE CLOUD
Cloud environments are yet another implementation of
virtualization technologies. In this case, the cloud provides
three main components:
A repository of virtual machine images.
A computation service in which users can request hard-
ware on a pay-per-use basis.
A storage service in which users can deposit data and pay
per data size and transfer.
These platforms enable us to provide preinstalled virtual machine images with a configured and built version of ITK that users can instantiate for their own testing. Prices of running machines in the cloud are on the order of $1 per hour, and users only pay for the time between the instantiation of the machine and when it is shut down.
Users in the cloud can also take pre-existing virtual machine
images, modify them (for example to install in them addi-
tional software), and then put these new images back into
the cloud repository for others to use. A permission system
makes it possible to make some of these images fully public,
or to restrict access to a limited set of users.
Cloud computing is a virtualization paradigm in which a
collection of computing resources are made available to
pay-per-use customers in the form of virtual computers. The
cloud service provider (for example Amazon EC2, Rackspace
or Microsoft Azure) actually owns a large collection of
hardware equipment distributed in different geographical
locations. Those hardware devices have been congured to
be able to run virtual computers at the request of customers.
A virtual computer is equivalent to a one-to-one copy of the
byte data existing in the hard drive of any modern desktop
or laptop. Such a copy includes not only the software appli-
cations that users interact with, but also the operation
system layers that normally interact with real hardware. In
the context of cloud computing, those virtual machines are
essentially run as emulated computers in an environment
where the emulation presents a minimum overhead.
Customers of the service can choose among many preconfigured virtual computers (also known as “images”), and can choose to instantiate them on hardware platforms of different capacity (memory, number of processors, disk space), also known as “instances.” Users can also instantiate as many of these virtual devices as they need, release them or reduce them according to their usage needs, and, all along, only pay for the resources that they are using.
Software infrastructure is available for performing this
scaling automatically, according to the load that an applica-
tion may be experiencing.
Open-source software platforms are a natural fit for cloud
computing environments because open-source software is
not crippled by licensing limitations, and therefore can be
copied, instantiated, modied and redistributed without
any legal concerns.
By storing scientic data in cloud storage services, it becomes
available directly to cloud computing devices without
further transfer of data. Modern cloud storage providers
offer multiple options for uploading large amounts of data,
ranging from a high-speed multi-channel upload network
to mail-shipped high-capacity storage media (such as multi-
terabyte hard drives), that is still the most cost-effective way
of transferring large amounts of data. Once customers have
uploaded data to the cloud, they can make it available to
the virtual machines they instantiate to process it.
We currently host an “image” in the Amazon EC2 elastic computing service. You can instantiate that image and have a functional computer in which an ITK source tree and its corresponding binary build are already available.
CONCLUSION
The luxury of being able to configure, pack and ship around the digital version of a fully configured computer gives us plenty of opportunities to address more effectively the challenges of large-scale software development and the joys of building communities around them.
REFERENCES
[1] http://sahanafoundation.org/
[2] http://hfoss.org/
[3] http://eden.sahanafoundation.org/wiki/
InstallationGuidelinesVirtualMachine
[4] http://midas.kitware.com/collection/view/30
[5] http://midas.kitware.com/item/view/450
RESOURCES
If you are interested in trying some of the available preconfigured images, the following resources are helpful:
VirtualBox
The community provides the following two resources.
http://virtualboxes.org/images/
http://virtualboximages.com/
VMWare
http://store.vmware.com/store/vmware/
en_US/DisplayProductDetailsPage/productID.221027300
Luis Ibáñez is a Technical Leader at Kitware, Inc. He is one of the main developers of the Insight Toolkit (ITK). Luis is a strong supporter of Open Access publishing and the verification of reproducibility in scientific publications.
A TOUR OF VTK’S POINTER CLASSES
One way in which VTK’s API differs from that of some other
libraries is that all vtkObjectBase-derived classes must be allo-
cated from the heap. This means that we tend to deal with
raw pointers a lot more often than those developing code
using other libraries and frameworks, such as the standard
template library, Boost and Qt. This has led to the addition
of several templated classes to make pointer management,
along with allocation and deallocation, easier.
AUTOMATIC ALLOCATION
The newest addition to the set of pointer classes is vtkNew,
which is designed to allocate and hold a VTK object. Due to
the nature of VTK constructors (no arguments), it is a very
simple class; on construction it allocates a new VTK object of
the type specied, and on destruction it will call the Delete()
method of the class.
vtkNew<vtkPoints> points;
points->SetDataTypeToDouble();
This class effectively maintains ownership of the object,
provides a very compact way of allocating VTK objects, and
assures that they will be deleted when the pointer goes out
of scope in much the same way as stack allocated objects.
The class is new in VTK 5.8.0. To pass the raw pointer to other
classes it is necessary to use the GetPointer() method,
myObject->SetPoints(points.GetPointer());
This class can also be used as a member variable in classes where the class contains instances of other VTK classes that should be allocated on construction and deallocated on destruction. It is not necessary to include the header of the class that the vtkNew pointer holds in your class header, just in the implementation file containing the constructor definition. If the member variable can be changed through the API of the class (for example, through a Set...() method), vtkSmartPointer would be a better choice.
SMART POINTER
This is the oldest of the pointer classes, providing a container that holds a reference to a vtkObjectBase-derived object. The vtkSmartPointer class is templated and derived from vtkSmartPointerBase. It adds automatic casting for objects held by its superclass. To allocate a new VTK object and store it in a vtkSmartPointer, use
vtkSmartPointer<vtkPoints> points =
vtkSmartPointer<vtkPoints>::New();
Using the contained object works in much the same way as
using vtkNew.
points->SetDataTypeToDouble();
This pointer class also allows implicit casts to the underlying
pointer, and so to pass the instance to another class is a little
simpler as it looks just like a normal pointer to the function,
for example,
myObject->SetPoints(points);
The vtkSmartPointer class can also be used as a member vari-
able. It is more suitable for the case where the class has API
to set what the member variable is pointing to, while main-
taining ownership of that class. On assignment, the smart
pointer will increment the reference count of the object,
and on destruction (or allocation of a different instance to
the smart pointer) the reference count will be decremented.
This can be especially helpful in the case of things such as
input data, where the user of a class can set vtkImageData,
for example, as the input, which is stored by the class in a
member variable of type vtkSmartPointer<vtkImageData>
and can then be used later. If the image that was passed in
is later deleted the smart pointer assures that the reference
count is still non-zero and the object remains valid. It can
cause issues with reference loops in some cases, and there
are also situations where it is more helpful to simply hold
on to a pointer that will become null should the instance be
deleted. The vtkWeakPointer class addresses these use cases.
WEAK POINTER
The vtkWeakPointer class is the third of our pointer classes,
providing a weak reference to a vtkObject. This means that
assigning to the vtkWeakPointer does not increase the
reference count of the instance. When the instance being
pointed to is destroyed, the vtkWeakPointer being held on
to gets set to null, avoiding issues with dangling pointers.
It is especially useful in cases where you need to hold on
to a pointer for an instance, but there is no need to keep it
around should it be deleted. This often makes it far easier to
avoid any reference loops too.
vtkTable *table = vtkTable::New();
vtkWeakPointer<vtkTable> weakTable = table;

The weakTable will remain valid until Delete() is called, so in the following code weakTable will never evaluate to true.

table->Delete();
if (weakTable)
{
  // We'll never get here, as table was deleted.
  vtkIdType num = weakTable->GetNumberOfColumns();
}
This makes it very easy to check for null before doing any-
thing with the pointer. It is again useful for member variables
of classes, in the case where there is no need to maintain
ownership of the instance being pointed to. It should be
noted that the above code is not thread safe.
USE OF POINTER CLASSES
As mentioned previously, each of the pointer classes can be
used for member variables in classes. They have different
attributes, and so the intended API around the data being
pointed to should be considered.
vtkNew: Strong ownership, where the class instantiates an
object which it owns for the lifetime of the class. The member
cannot be changed in place (much like stack variables).
vtkSmartPointer: Ownership of the instance is maintained.
The class may or may not instantiate the initial object and
the instance being pointed to can be changed.
vtkWeakPointer: Weak ownership, where the class does not
instantiate an object. If the object instance is deleted, then
the weak pointer is null, which avoids dangling pointers
without ownership.
A simplied class denition might look like,
#include “vtkObject.h”
#include “vtkNew.h”
#include “vtkSmartPointer.h”
9
Once the nTable and spTable objects go out of scope, the
reference count would drop to zero, and the wpTable would
be set to null. There is no assignment of objects to vtkNew,
and so other pointers cannot be assigned to it. Both the
vtkSmartPointer and vtkWeakPointer have implicit casting
to the pointer type of the class and dene the equality oper-
ator between them, and so the GetPointer() call is not strictly
necessary when converting between these two pointer types
and raw pointers.
CONCLUSIONS
Each of the classes has online documentation, and each
satises a specic need in VTK. They should make memory
management in your classes and applications simpler if used
correctly, reducing line counts and decreasing code complex-
ity. They are also suitable for use in classes, and only their
header needs to be included in the declaration le. The new
vtkNew class is especially useful in things like tests and small
sample applications where several classes must be allocated.
The vtkSmartPointer and vtkWeakPointer complement one
another where pointers must be held on to for later use.
Marcus Hanwell is an R&D engineer in the
scientic visualization team at Kitware . He
joined the company in October 2009, and
has a background in open source, Physics
and Chemistry. He spends most of his time
working with Sandia on VTK, Titan and
ParaView.
#include “vtkWeakPointer.h”
class vtkTable;
class vtkExample
{
static vtkExample * New();
void SetInputTable(vtkTable *table);
vtkTable * GetInputTable();
void SetColorTable(vtkTable *table);
vtkTable * GetColorTable();
vtkTable * GetTableCache();
protected:
vtkExample();
~vtkExample();
vtkNew<vtkTable> TableCache;
vtkSmartPointer<vtkTable> InputTable;
vtkWeakPointer<vtkTable> ColorTable;
};
The corresponding implementation le would contain,
#include “vtkExample.h”
#include “vtkObjectFactory.h”
#include “vtkTable.h”
vtkStandardNewMacro(vtkExample)
vtkExample::vtkExample()
{
this->InputTable =
vtkSmartPointer<vtkTable>::New();
}
vtkExample::~vtkExample() { }
void vtkExample::SetInputTable(vtkTable *table)
{
this->InputTable = table;
}
vtkTable * vtkExample::GetInputTable()
{
return this->InputTable.GetPointer();
}
void vtkExample::SetColorTable(vtkTable *table)
{
this->ColorTable = table;
}
vtkTable * vtkExample::GetColorTable()
{
return this->ColorTable.GetPointer();
}
vtkTable * vtkExample::GetTableCache()
{
return this->TableCache.GetPointer();
}
Note that there was no need to call Delete() on any of
the member variables. The vtkNew class allocates the vtk-
Table on construction, and that instance cannot be replaced
over the lifetime of the class instance. When the class is
destructed, the vtkNew object calls delete on the table.
Next, the vtkSmartPointer allocates an initial instance of
vtkTable, which can later be replaced using SetInputTable.
Finally vtkWeakPointer may be set using SetColorTable if it
is, it will point to that instance until it is destroyed.
POINTER CLASS INTERACTION
The pointer classes all work with each other. The vtkNew
class is ideal for allocation of new objects in a very concise
form. Since it just decrements the reference count when it
goes out of scope, it works as expected when used with the
other two pointer classes.
vtkNew<vtkTable> nTable; // Reference count of 1
vtkSmartPointer<vtkTable> spTable =
nTable.GetPointer(); // R e fer e nce co u nt o f 2
vtkWeakPointer<vtkTable> wpTable =
nTable.GetPointer(); // Reference count of 2
HOSTING BINARY FILES ON MIDAS
TO REDUCE GIT REPOSITORY SIZE
Many of the projects to which Kitware contributes have
recently switched to using the distributed version control
system Git as a replacement for older, centralized version
control systems such as SVN or CVS. Among these projects
are CMake, ITK, MIDAS, and VTK. One of the key advantages
of these version control systems is the ability to develop
projects using a "branchy" workow that allows developers
to work on related changes on a separate "topic branch"
and commit new features onto a separate branch that can
be tested and made stable before being integrated into
the "master" branch, which is considered stable. There are,
however, challenges to using Git, most notably the saving of each version of a binary file and the resulting large repository size, which is addressed in this article.
DISTRIBUTED VCS
The major difference in using a distributed VCS is that the entire repository history is stored locally on each user's machine, instead of being stored on only one central server. When changes are committed to any text file, such as a source file, the changes are stored as the line-by-line difference between the two versions of the file. For any binary file, however, it's much more difficult to store the difference, so instead Git simply stores each version of the binary file in the repository in its entirety. Naturally, this causes the size of the history to become unacceptably large if the repository contains sizable binary files that change frequently. We needed an alternative place to store binary files outside the repository that could be referenced from within the source code.
The solution we created uses MIDAS as the place to host the binary files. MIDAS provides a hierarchical organization of data on the server that can emulate a filesystem structure, and also provides access and administrative controls to the data in each directory. MIDAS also provides, via its web API, a mechanism for downloading a file stored on the server by passing the MD5 checksum of the file's contents.
HOSTING AND REFERENCING BINARY FILES
In order to move les out of the source repository and onto
MIDAS, the rst step is to upload the les to the MIDAS
server. Once the les (called "bitstreams" in MIDAS) have
been uploaded to the server, we will remove them from the
source code repository and replace each removed binary le
with a "key le." This key le acts as a placeholder for the
real le; it simply contains the MD5 checksum of the actual
le's contents. A key le has the same name as the actual
le, with a ".md5" extension added to the end.
To get the key le corresponding to a le stored on the
MIDAS server, navigate in your browser to the item con-
taining the desired bitstream. Click the checkbox that says
"Advanced View," and a link titled Download MD5 Key File
will appear next to each bitstream in the list. These links can
be used to download individual key les.
Figure 1: Advanced view of MIDAS bitstreams
Alternatively, you can download all of the key files for an item at once using the Item menu at the top.

Figure 2: Download all key files in an item

Choose "key files (.tgz)" or "key files (.zip)", depending on which compression format you prefer, and the keys will be downloaded in a zipped directory to your machine. You can then unzip them and copy them into the source repository in place of the actual files. Each of these key files is text and is only 32 bytes, so the overhead of storing them in the repository is minimal.
The most common form of binary data in our source repositories is data used for automated testing, such as baseline and input images. To allow the MIDAS key files to be used as placeholders for real files, we created a CMake macro that's a thin wrapper around the usual "add_test" command. The main difference is that instead of referring to actual binary files in the source tree, you can call this macro with a reference to a placeholder file. Then, at test time, all of the files referenced as test arguments will be downloaded from MIDAS just prior to running the test, and the test will be run on the files that have been downloaded (by convention, into the build tree).
To use this macro, you'll need to add the following line in
your CMakeLists code:
include(MIDAS)
Additionally, you need to make sure that MIDAS.cmake is
in your CMake module path and set a few CMake variables
prior to running the macro.
set(MIDAS_REST_URL
"http://midas.kitware.com/api/rest")
The macro communicates with MIDAS via its rest API, so
you must specify the URL of the server from which you will
download the data.
set(MIDAS_KEY_DIR
"${PROJECT_SOURCE_DIR}/Testing/Data")
Set this variable to point to the top-level directory where you have stored your key files. You can keep key files in the same nested directory structure you kept your old files in; all references to key files will be relative paths to this MIDAS_KEY_DIR directory.
set(MIDAS_DATA_DIR
"${PROJECT_BINARY_DIR}/Testing/Data")
This is an optional variable. This directory is the location
where the actual les will be downloaded at test time. By
convention, this should be placed outside of your source tree
so as not to pollute it.
Once you have set these variables, you may call the new
macro, midas_add_test(). This macro should be called with
the same parameters as you'd call add_test, but substitute
any references to moved les with a new type of reference
to the placeholder le. An example is shown here, taken
from the BRAINSTools module of Slicer4. The original call to
add_test was:
add_test(NAME ${BRAINSFitTestName}
COMMAND ${LAUNCH_EXE}
$<TARGET_FILE:BRAINSFitTest>
--compare
${BRAINSFitTestName}.result.nii.gz
${BRAINSFit_BINARY_DIR}/
Testing/${BRAINSFitTestName}.test.nii.gz
--compareIntensityTolerance 7
--compareRadiusTolerance 0
--compareNumberOfPixelsTolerance 777
BRAINSFitTest
--costMetric MMI
--failureExitCode -1
--writeTransformOnFailure
--numberOfIterations 2500
--numberOfHistogramBins 200
--numberOfSamples 131072
--translationScale 250
--minimumStepLength 0.001
--outputVolumePixelType uchar
--transformType Affine
--initialTransform
BRAINSFitTest_Initializer_RigidRotationNoMasks.mat
--maskProcessingMode ROI
--fixedVolume test.nii.gz
--fixedBinaryVolume test.mask
--movingVolume rotation.test.nii.gz
--movingBinaryVolume rotation.test.mask
--outputVolume ${BRAINSFit_BINARY_DIR}/
Testing/${BRAINSFitTestName}.test.nii.gz
--outputTransform ${BRAINSFit_BINARY_DIR}/
Testing/${BRAINSFitTestName}.mat
--debugLevel 50
)
After moving the les to MIDAS and replacing them with
their key les, the macro looks like this:
midas_add_test(NAME ${BRAINSFitTestName}
COMMAND ${LAUNCH_EXE}
$<TARGET_FILE:BRAINSFitTest>
--compare
MIDAS{${BRAINSFitTestName}.result.nii.gz.md5}
${BRAINSFit_BINARY_DIR}/
Testing/${BRAINSFitTestName}.test.nii.gz
--compareIntensityTolerance 7
--compareRadiusTolerance 0
--compareNumberOfPixelsTolerance 777
BRAINSFitTest
--costMetric MMI
--failureExitCode -1
--writeTransformOnFailure
--numberOfIterations 2500
--numberOfHistogramBins 200
--numberOfSamples 131072
--translationScale 250
--minimumStepLength 0.001
--outputVolumePixelType uchar
--transformType Affine
--initialTransform MIDAS{
BRAINSFitTest_Initializer_RigidRotationNoMasks.mat.md5}
--maskProcessingMode ROI
--fixedVolume
MIDAS{test.nii.gz.md5}
--fixedBinaryVolume
MIDAS{test.mask.md5}
--movingVolume
MIDAS{rotation.test.nii.gz.md5}
--movingBinaryVolume
MIDAS{rotation.test.mask.md5}
--outputVolume ${BRAINSFit_BINARY_DIR}/
Testing/${BRAINSFitTestName}.test.nii.gz
--outputTransform ${BRAINSFit_BINARY_DIR}/
Testing/${BRAINSFitTestName}.mat
--debugLevel 50
)
The references to binary les in the source directory have
been changed to refer to the key le instead, and wrapped
with the MIDAS{...} keyword to let the macro know that
the les need to be downloaded. When you congure the
project, calling the midas_add_test macro actually creates
two tests. The rst of these is the fetchData test, which per-
forms the download of all the data required by the actual
test, which is then added by the macro. The actual test
is made to explicitly depend on the fetchData test, which
makes this macro safe for use in parallel-CTest environments.
Another use case is for tests that pass a directory as an
argument instead of a single le. This is the case in Slicer4's
DicomToNrrdConverter module, which tests against many
DICOM directories containing a large number of binary les.
There is an additional signature for this use case: MIDAS_
DIRECTORY{...}. Pass in the name of a directory that contains
multiple key les. All of the key les will be replaced by
the corresponding actual les at test time and the directory
where they were downloaded will be passed to the test as
an argument.
NETWORK CONNECTIVITY
If you want to download all of the required testing data in
anticipation of losing your network connectivity, run CMake
on your project to congure the test set, and then in the
build directory, run the following command:
ctest -R _fetchData
This will fetch all of the data needed for the tests. The data
only needs to be downloaded once; subsequent calls to run
the tests will reference the data that was previously down-
loaded to your machine, so no further network connectivity
is required.
CONCLUSION
The midas_add_test macro is designed so that test develop-
ers will have an easy time converting their existing tests and
managing the synchronization of data between the MIDAS
server and their source repositories. Those running the tests
will not have to do anything different except to ensure the
data is downloaded once they have network connectivity.
Full documentation for this macro can be found at http://
www.kitware.com/midaswiki/index.php/MIDAS%2BCTest
Zach Mullen has been an R&D Engineer at
Kitware since 2009. He works on several of
Kitware’s software process tools, including
CMake, CTest, CDash, and MIDAS.
KITWARE NEWS
SPIE MEDICAL IMAGING
The 2011 SPIE Medical Imaging conference was held in
Orlando from February 12-18. Kitware, once again, had a
strong showing.
Dr. Andinet Enquobahrie was co-host of the IGSTK Users
Group Meeting that was held in conjunction with the con-
ference. Andinet presented on PET-CT support in IGSTK and
users presented a diverse set of ongoing applications, clinical
trials, and research algorithms.
Dr. Michel Audette was lead author on a paper involving
ParaView, Medical Imaging, and Surgical Simulation in
partnership with JHU and UNC – "Approach-specific multi-
grid anatomical modeling for neurosurgery simulation with
Paraview.” The other authors are Denis Rivière, NeuroSpin
(France); Charles Law, Luis Ibanez, Stephen R. Aylward, Julien
Finet, Kitware, Inc. (USA); Xunlei Wu, Duke University (USA);
and Matthew Ewend, The University of North Carolina at
Chapel Hill (USA).
Lastly, Dr. Stephen Aylward co-organized the third Live
Demonstration Workshop that is held in conjunction with the Computer-Aided Diagnosis track. This year’s workshop
(http://www.kitware.com/workshops/SPIE_CAD_2011.html)
featured 15 outstanding demonstrations that spanned a
variety of organs (heart, lung, colon, breast), modalities
(mammography, ultrasound, acoustic, MR), and diseases
(cancer, COPD, mental disorders). Over 120 people attended
these demonstrations, which will be repeated next year.
IGSTK USER GROUP MEETING
The fth IGSTK Users Group meeting was held in Orlando,
Florida in conjunction with the SPIE Medical Imaging
conference. Each year the meeting is an opportunity for col-
laborators and users to gather and discuss their projects and
use of IGSTK.
In addition to Georgetown University and Kitware, there
were six participating institutions:
1. Princess Margaret Hospital, Toronto, Ontario
2. The Perk Lab, Queen's University, Kingston, Ontario
3. Humanoids and Intelligence Systems Lab, Karlsruhe
Institute of Technology (KIT), Karlsruhe, Germany
4. Innovation Center Computer Assisted Surgery (ICCAS),
University of Leipzig, Leipzig, Germany
5. 4D Visualization Laboratory, Innsbruck Medical University,
Innsbruck, Austria
6. Hospital Italiano de Buenos Aires, Buenos Aires, Argentina
The morning of the meeting was dedicated to a series of
presentations by core developers on new developments that
were released as part of IGSTK 4.4. In the afternoon, IGSTK
users presented their applications. This year the presenta-
tions covered image-guided head and neck surgery systems,
a facet joint injection training simulator, and neurosurgery.
Procedure performed during a clinical trial of an image-guided
neurosurgical navigation system in Buenos Aires, Argentina
One of the highlights of the meeting was a presentation
by Sebastian Ordas, who owns a small company in Buenos
Aires, Argentina. He has a proprietary neurological image-
guided navigation system developed using IGSTK. The
system is currently under clinical trial and has been tested
in 16 clinical cases thus far, including large and superficial
tumor resection, tumor drainage, deep and small tumor
resection, tumor biopsy and epilepsy. All the presentations
were well done and the slides can be found on the IGSTK wiki at
http://public.kitware.com/IGSTKWIKI/.
CTK HACKFEST
Kitware co-hosted the third annual CTK Hackfest from
February 7-11 at the Franklin Hotel in Chapel Hill, NC. In
preceding years, the event was held in Washington, DC
and Barcelona, Spain, with a pre-Hackfest in Heidelberg,
Germany. CTK, the Common Toolkit, provides a unified set
of basic programming constructs that are useful for medical
imaging applications development and facilitates the
exchange and combination of code and data. The goal of
the hackfest is to spend several days focusing on the most
pressing challenges and collaborate to improve the toolkit.
This year there were returning participants from leading
research groups around the world, including the teams of
MITK (Heidelberg, Germany), MAF (Bologna, Italy), DreamTk
(INRIA, France); Steve Pieper (Isomics, Boston) and Kitware
attendees Dave Partyka, Jean-Christophe Fillion-Robin,
Julien Finet and Stephen Aylward. In addition, this hackfest
welcomed new participants including Lawrence Tarbox of
Washington University in St. Louis, an active participant in
the DICOM Standards Committee and Nicholas Herlambang
from AZE, Japan.
During this event, significant work was done on CTK, with a
large emphasis on a DICOM PACS query/retrieve application
(with reusable Qt widgets/database); a simple, VTK-free Qt
image viewer (optionally supporting DICOM image datasets
via a DCMTK dependency); data transfer and application
hosting (supporting DICOM Part 19 protocols); and an XML-RPC
event bus. A new development is that CTK, and inherently
Slicer, will soon be using the Kitware Qt GUI testing frame-
work for automatic GUI testing.
The CTK hackfest was funded, in part, by:
The Neuroimage Analysis Center (NAC, PI: Kikinis) P41 RR
013218
The National Alliance for Medical Image Computing (NA-
MIC, PI: Kikinis) 1U54EB005149-01
Image Registration for Ultrasound-Based Neurosurgical
Navigation Project (PI: Aylward, Wells) 1R01CA138419-01
VIZBI 2011 WORKSHOP
The VIZBI 2011 workshop was held March 16-18, 2011 at
the Broad Institute in Cambridge, MA. The workshop was a
review of the state of the art in biological visualization and
highlighted current and future challenges in visualization
across the broad range of biological research areas. VIZBI
takes a unique approach to the science and dissemination of
biological information. It brings together biologists, geneti-
cists, computer scientists and artists in an exploration of
different ways to analyze and present unique information.
The Broad Institute is uniquely suited to hosting this type of
intimate conference.
This was the second year of the workshop, which featured
four keynote addresses, six scientific sessions and four poster
sessions spanning a range of topics, from the genome, to
proteins, cells, anatomy and on to populations and evolu-
tion. Along the way the workshop addressed outreach,
teaching and general visualization principles. Following
the workshop, the organizers held a set of tutorial sessions
that combined instruction in various visualization tools with
hands-on experience using those tools on the students'
own data. Kitware was invited to present a tutorial, and
Wesley Turner presented on the use of VTK and ParaView for
medical and biological visualization.
IS&T/SPIE ELECTRONIC IMAGING CONFERENCE
The IS&T/SPIE Electronic Imaging conference was held in San
Francisco from January 23–27 this year. This conference is
the must-attend event for all aspects of electronic imaging,
including imaging systems, image processing, image quality,
and algorithms. It featured multiple, parallel tracks on
3D Imaging, Interaction and Measurement; Imaging,
Visualization, and Perception; Image Processing; Digital
Imaging Sensors and Applications; Multimedia Processing
and Applications; and Visual Information Processing and
Communication. This year Kitware joined forces with Portola
Pharmaceuticals to present a poster and live demonstration
on “Tracking ow of leukocytes in blood for drug analysis,”
(authors A. Basharat, W. D. Turner, G. Stephens, B. Badillo, R.
Lumpkin, P. Andre, and A. Perera). The work was presented
by Arslan Basharat and Wesley Turner of Kitware.
KITWARE AWARDED NASA GRANT
It was announced in March that NASA awarded SBIR funding
to Kitware to further develop ParaView to meet the needs
of ultrascale visualization. The software will address critical
issues in order to enable real-time investigation of extremely
large datasets using massive distributed memory architec-
tures with up to 100,000 cores.
We will be collaborating with California-based SciberQuest,
leaders in kinetic modeling of space plasmas, to complete
the project. This partnership enables us to work with real
world data from petascale simulations directly relevant to
NASA’s missions and scientific goals, and allows the develop-
ment to be guided by the ultimate users of the software.
In this investigative phase we will identify scaling bottlenecks
in ParaView, which is currently used by NASA to explore the
results of trillion element particle simulations on the Pleiades
supercomputer. As the number of processors scales up past
ten thousand, we anticipate that the most critical issues will
be data I/O, architectural overhead, and the compositing of
the partial results. While the Phase I effort of this project is
limited to developing prototypes and only select improve-
ments will be incorporated into the software, if the Phase
II effort is funded the complete range of improvements will
be merged into ParaView and the underlying Visualization
Toolkit (VTK), which will benefit tens of thousands of
researchers world-wide.
NA-MIC PROJECT WEEK
The NA-MIC community met for their 7th annual All Hands
Meeting and External Advisory Board meeting January 7-10
in Salt Lake City, Utah. With the recent renewal of the NIH
National Center of Biomedical Computing NA-MIC grant,
this AHM/EAB meeting marked the beginning of the next
four years of what will ultimately be a 10-year project.
Well over 100 people were in attendance, including science
ofcers and other NIH ofcials. One common theme to this
meeting was the impact Slicer was having throughtout the
world. That impact is well illustrated by the gure below - it
shows the locations to which Slicer has been downloaded.
Kitware was present in force and made significant con-
tributions to the meeting and Slicer. Stephen Aylward,
Jean-Christophe Fillion-Robin, Julien Finet, Danielle Pace,
Dave Partyka, Zach Mullen and Will Schroeder were in
attendance. Some of the areas where Kitware was focus-
ing its efforts included the preview release of Slicer 4 (a
Qt-based rewrite of Slicer spearheaded by Julien Finet and
Jean-Christophe Fillion-Robin); the beta release of TubeTK
(an adjunct toolkit that provides sliding-organ registration
and vascular analysis capabilities, by Danielle and Stephen);
a 64-bit Slicer Windows build by Dave Partyka and Steve
Pieper; DICOM/MIDAS data integration by Zach Mullen;
and two new reformat widgets being developed by Will
Schroeder. We are looking forward to four more full and
rewarding years with NA-MIC, and many more years with
Slicer beyond those.
VTK SELECTED FOR GOOGLE SUMMER OF CODE
The Visualization Toolkit (VTK) has been accepted for the
2011 Google Summer of Code, with Kitware acting as the
mentoring organization. This program encourages student
participation in open source communities through three-
month paid development projects. Students interested in the
program apply to work on a specic project and work with a
mentor at the organization over the course of the summer.
This global program gives students the opportunity to work
on real-world software projects and provides mentoring
organizations with potential new developers. Additionally,
since all development is open source, the projects grow and
code is contributed back to the community.
Of 417 applications this year, Google selected 175 open
source projects for participation, 50 of which are new to the
program. Google has posted a list of all accepted projects.
Kitware has several project ideas for students, such as the
development of new 2D charts, chemistry visualization,
volume rendering in WebGL, AMR volume rendering and
Apple iOS support for ParaViewWeb.
TIBBETTS AWARD
Kitware received a Tibbetts Award for its Software Toolkit
for Image-Guided Surgery (IGSTK) Phase I and II STTRs. The
award, which recognizes companies who represent excel-
lence in achieving the mission and goals of the SBIR and
STTR programs, is named for Roland Tibbetts. Tibbetts is con-
sidered the father of the Small Business Innovation Research
(SBIR) program, which he began as an experimental project
at the National Science Foundation in the early 1980s.
IGSTK, our tool for research involving minimally invasive
image-guided medical procedures, is being used in investiga-
tions into new surgical techniques that may improve surgical
accuracy and precision, increase a surgeon’s ability to confi-
dently treat challenging and complex pathologies, decrease
surgical trauma, and reduce recovery time for patients.
The IGSTK STTR is a collaborative research project between
Kitware’s New York and North Carolina offices and the
Computer Aided Interventional and Medical Robotics
(CAIMR) group, which is led by Dr. Kevin Cleary at
Georgetown University.
Dr. Andinet Enquobahrie, with Roland Tibbetts, the father
of the SBIR program, accepted the award at a ceremony in
Washington, DC on behalf of Kitware and the Image Guided
Surgery Toolkit team.
NA-MIC REGISTRATION RETREAT
From February 19-23, Danielle Pace, Will Schroeder, and
Stephen Aylward attended a "Registration Retreat" in San
Juan, Puerto Rico. This event brings together registration
algorithm researchers from the Neuroimage Analysis Center,
NA-MIC, the National Center for Image Guided Therapy, and
other invited researchers.
The meeting consisted of a daily mix of 2-4 hours of group
discussions followed by break-out meetings. There were
13 participants, including Tina Kapur, Brian Avants (an
ITK collaborator), Guido Gerig (University of Utah), Kilian
Pohl (University of Pennsylvania), Torsten Rohlfing (SRI
International), William Wells (MIT), Matthew Toews (McGill
University), Gregory Sharp (MGH), and C-F Westin (Harvard).
The intended product of the meeting is a journal article that
presents the unmet challenges and opportunities in the clini-
cal application of registration algorithms. The group made
good progress on the paper, which will be led by Tina Kapur
and Stephen Aylward.
KITWARE WINS IARPA GRANT
Kitware was awarded a $2 million, one-year grant by the
Intelligence Advanced Research Projects Activity (IARPA)
to develop a system prototype called General Engine for
Indexing Events (GENIE) to address the ALADDIN challenge.
GENIE’s primary purpose is to enhance the capabilities of
Automated Low-Level Analysis and Description of Diverse
Intelligence Video (ALADDIN), an IARPA program that
focuses on finding activities in unconstrained or “open-
source” video collections, like YouTube, by leveraging the
most promising, relevant technologies to continuously
search through millions of new and archived videos on the
web to detect the few that contain meaningful, operational,
and salient information. It will also serve as a powerful plat-
form for conducting novel research in event recognition and
search by providing software tools that simplify quantified
evaluation of research algorithms.
The idea behind this program is to recognize specific activities
based on evidential descriptors contained within the video
such as location, objects and activities being performed (i.e.,
making a cake, hitting a baseball or constructing a shelter).
The GENIE solution requires a strong capability in multime-
dia content description and event modeling, as well as the
ability to architect a scalable, end-to-end solution. These
characteristics, beyond ALADDIN’s current state-of-the-
art video recognition technology system, are necessary to
overcome the key challenge in web video recognition: the
depiction of virtually any event and object in a limitless
number of styles, qualities and scenes.
“By enhancing ALADDIN’s capabilities, GENIE will have a
revolutionary impact on the automated analysis of web-
based videos and, undoubtedly, contribute to the creation
of additional military and domestic applications,” said Lynn
Bardsley, Kitware’s program manager for computer vision.
“This award is a testament to Kitware’s progress in devel-
oping unique computer vision and intelligence software
solutions.”
Dr. Amitha Perera, a Technical Leader at Kitware and the
project leader on GENIE, has assembled a world-class team
of researchers and partners for GENIE Phase I, including
Honeywell Labs, a leading defense technology company and
market leader in commercial video surveillance systems and
large-scale video indexing. Kitware has previously worked
with Honeywell Labs on the DARPA VIRAT and PerSEAS pro-
grams. Additional academic collaborators include Stanford
University, Georgia Tech, the University at Buffalo and Simon
Fraser University.
KITWARE PUBLICATIONS UPDATE
The printing of the VTK Textbook is complete and it is again
available for purchase through the Kitware online store.
Additionally, we have added new ways to provide feedback
on our books through our website via a bug tracker or email.
We encourage our readers to visit the website and send us
any comments or suggestions so that we can improve the
next editions of each of our books.
NEW ONLINE HOME FOR THE SOURCE
Based on reader feedback, we have created a new digital
version of the Source. This new digital adaptation is in the
style of a blog, and readers will now be able to post com-
ments on the articles and subscribe to specific topics and
authors through an RSS feed. We encourage our readers to
subscribe in order to take full advantage of the new, inter-
active features. To register, please visit www.kitware.com/
source. If you have any feedback on the new digital version,
please email us at editor@kitware.com.
KITWARE COURSES IN EUROPE
This past February the team from Kitware’s office in Lyon,
France, taught its first public course in Europe. The three-day
course, by Julien Jomier and Charles Marion, focused on ITK,
VTK, ParaView and Kitware’s set of build and testing tools:
CMake, CTest and CDash. The course was filled to capacity,
attended by an international group of students representing
Brazil, Germany, Ireland and France.
Students at Kitware's course taught in Lyon, France
The success of this first course has prompted the Kitware
team to teach an advanced, two-day VTK course to be held
April 7-8 in Lyon. This course will cover an overview of the
VTK architecture, VTK visualization pipelines, information
visualization techniques, writing custom filters and parallel
processing and rendering techniques.
If you are interested in one of our courses, please visit our
website to see our full range of offerings.
KITWARE CELEBRATES ITS ANNIVERSARY
This March Kitware celebrated its 13th anniversary. The
company has grown significantly since 1998, from five co-
founders to nearly 100 employees, with offices in three cities
on two continents. There are several factors that I believe
have enabled our success, which I would like to share.
First, at Kitware we have always been fortunate in that we
have an attitude of service, doing what each of us does best
and helping each other. We have been able to take the best
in each of us to create something better than any one of
us could do on our own. We also had great early custom-
ers to fuel our growth, including Terry Yoo at the National
Library of Medicine who established the ITK project in 1999,
Jim Ahrens at Los Alamos who pushed for parallelizing VTK
for use on supercomputers and who started the ParaView
project, Brian Wylie at Sandia who supported and promoted
ParaView, and a collection of customers ranging from oil
and gas to computing hardware manufacturers to research
collaborators in academia and various research labs. I also
think we are riding a profound wave of innovation and
collaboration due to the emerging open source movement,
and have been able to ride this wave to create technically
excellent products and very effective collaborations. At
Kitware, we have also created a synergistic and powerful
technology portfolio that enables us to create solutions for
customers that few other companies can deliver. Finally, we
have developed outstanding business processes that provide
a foundation of stability in what is a very dynamic business
environment.
However, while all of these are unquestionably vital to our
past and future success, there is one element of our success
that I have come to fully understand, and that is the caliber
of our people, which I would like to explain by way of a
discussion of workplace diversity. I am sure that many of you
have heard business leaders talking about the importance
of diversity in the workplace. Early on I thought "diver-
sity" simply referred to the mix of genders, ages, cultural
backgrounds and so on that creates a stimulating work envi-
ronment. While these factors are certainly important, they
are only the tip of the diversity iceberg: what makes this
company great is the tremendous range of skills, abilities,
talents and potential that we collectively represent.
In the coming years we will continue the practices that have
enabled our success: service to our co-workers and collabora-
tors; finding supportive customers; growing our leading-edge
technology portfolio; extending our open source outreach;
and refining our business processes. Moreover, we will con-
tinue to hire some of the best talent we can nd, and we will
do all we can to unleash the diverse potential of the people
at Kitware.
-Will Schroeder, President and CEO
UPCOMING CONFERENCES AND EVENTS
2011 Northeast Bioengineering Conference
April 1-3 at Rensselaer Polytechnic Institute in Troy, NY.
Rick Avila will be giving a keynote speech and Will Schroeder
will be participating in the CEO Forum discussion.
OpenFOAM Workshop
June 13-16 at Penn State University in State College, PA.
Dave DeMarle will be teaching an advanced ParaView tuto-
rial on June 13.
NA-MIC Summer Project Week
June 20-24 at MIT in Cambridge, MA. This project week
focuses on hands-on R&D for applications in image-guided
therapy and other areas of biomedical research. Will
Schroeder will be attending.
IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
June 21-25 in Colorado Springs, CO. Anthony Hoogs is co-
organizing two workshops, the Workshop on the Activity
Recognition Competition and the Workshop on Camera
Networks and Wide Area Scene Analysis. Luis Ibáñez, Matt
Leotta, Amitha Perera and Patrick Reynolds will be teach-
ing the tutorial "ITK meets OpenCV: A New Open Source
Software Resource for CV" on June 20th.
SIGGRAPH 2011
August 7-11 in Vancouver, BC, Canada. SIGGRAPH is the pre-
mier international event focused on computer graphics and
interactive techniques. Aashish Chaudhary will be attending.
If you would like to set up a time to meet with us at any
of these events, please contact our office by phone at (518)
371-3971 or email at kitware@kitware.com.
Kitware’s Software Developer’s Quarterly is published by
Kitware, Inc., Clifton Park, New York.
Contributors: Lisa Avila, Stephen Aylward, Christian Boucheny,
David Cole, Katie Cronen, Andinet Enquobahrie, Julien Finet,
Marcus Hanwell, Luis Ibáñez, Julien Jomier, Zach Mullen, Patrick
Reynolds, Alejandro Ribés, Will Schroeder, and Wes Turner.
Graphic Design: Steve Jordan
Editor: Katie Osterdahl
To contribute to Kitware’s open-source dialogue in future
editions, or for more information on contributing to specic
projects, please contact the editor at editor@kitware.com.
This work is licensed under a Creative Commons Attribution-
NoDerivs 3.0 Unported License.
Kitware, ParaView, and VolView are registered trademarks
of Kitware, Inc. All other trademarks are property of their
respective owners.
In addition to providing readers with updates on Kitware
product development and news pertinent to the open source
community, the Kitware Source delivers basic information
on recent releases, upcoming changes and detailed technical
articles related to Kitware’s open-source projects, including:
• TheVisualizationToolkit(www.vtk.org)
• TheInsightSegmentationandRegistrationToolkit(www.itk.org)
• ParaView(www.paraview.org)
• TheImageGuidedSurgeryToolkit(www.igstk.org)
• CMake(www.cmake.org)
• CDash(www.cdash.org)
• MIDAS(www.kitware.com/midas)
• BatchMake(www.batchmake.org)
Kitware would like to encourage our active developer
community to contribute to the Source. Contributions may
include a technical article describing an enhancement you’ve
made to a Kitware open-source project or successes/lessons
learned via developing a product built upon one or more
of Kitware’s open-source projects. Authors of any accepted
article will receive a free, five-volume set of Kitware books.
NEW EMPLOYEES
Zak Ford
Zak joined Kitware in February as a new systems administra-
tor. Prior to joining, he worked as a systems administrator
and web application developer for Gui Productions, Inc. Zak
studied Computer Science at the University at Albany.
Tami Grasso
Tami joined Kitware in April as the new office assistant. Prior
to joining Kitware, Tami worked in a customer service role
for Time Warner Cable and was a co-owner of Capitaland
Flooring Company, where she was responsible for main-
taining company records, providing customer support and
developing marketing materials.
Michelle Kimmel
Michelle joined Kitware in February as an accountant on the
finance team. She received her B.S. in Accounting from the
University of Maryland. Prior to joining Kitware, Michelle
held accounting positions with Seton Health Systems, Inc.
and the Walden Golf Club, where she was responsible for
preparing monthly journal entries, managing account rec-
onciliations, processing accounts payable and assisting in
preparation of grant paperwork.
John Tourtellot
John joined Kitware in February as an R&D engineer
on the computer vision team. He received his B.S. cum
laude and M. Eng. degrees in electrical engineering from
Rensselaer Polytechnic Institute. Prior to coming to Kitware,
John worked as a senior software engineer at Simmetrix,
where he developed component software for simulation-
based design.
George Zagaris
George joined Kitware in January as an R&D engineer for the
scientic computing group. He earned his B.S. (Honors) and
M.S. in computer science from the College of William and
Mary, where his M.S. research focused on parallel unstruc-
tured mesh generation for CFD applications and was partially
funded by the NASA Graduate Student Research Program
Fellowship at NASA Langley Research Center (LaRC).
KITWARE INTERNSHIPS
Kitware Internships provide current college students with
the opportunity to gain hands-on experience working with
leaders in their fields on cutting-edge problems. Our busi-
ness model is based on open source software, which makes
for an exciting, rewarding work environment.
At Kitware, you will assist in the development of founda-
tional research and leading-edge technology across our five
business areas. We are actively recruiting interns for the
summer. If you are interested in applying, please send your
resume to internships@kitware.com.
EMPLOYMENT OPPORTUNITIES
Kitware is seeking talented, motivated and creative indi-
viduals to become part of our team. As one of the fastest
growing companies in the country, we have an immedi-
ate need for software developers, especially those with
experience in computer vision, scientific computing and
medical imaging.
At Kitware, you will work on cutting-edge research prob-
lems alongside experts in the fields of visualization, medical
imaging, computer vision, 3D data publishing and technical
software development. Our open source business model
means that your impact goes far beyond Kitware as you
become part of the worldwide communities surrounding
our projects.
Kitware employees are passionate and dedicated to inno-
vative open-source solutions. They enjoy a collaborative
work environment that empowers them to pursue new
opportunities and challenge the status quo with new ideas.
In addition to providing an excellent workplace, we offer
comprehensive benefits including: flexible hours; six weeks
paid time off; a computer hardware budget; 401(k); health,
vision, dental and life insurance; short- and long-term dis-
ability; visa processing; a generous compensation plan; profit
sharing; and free drinks and snacks.
Interested applicants should send a cover letter and resume
to jobs@kitware.com for their immediate consideration.