886 IEEE TRANSACTIONS ON MEDICAL IMAGING, VOL. 24, NO. 7, JULY 2005
A Three-Dimensional Registration Method for
Automated Fusion of Micro PET-CT-SPECT
Whole-Body Images
Meei-Ling Jan, Member, IEEE, Keh-Shih Chuang*, Guo-Wei Chen, Yu-Ching Ni, Sharon Chen, Chih-Hsien Chang,
Jay Wu, Te-Wei Lee, and Ying-Kai Fu
Abstract—Micro positron emission tomography (PET) and
micro single-photon emission computed tomography (SPECT),
used for imaging small animals, have become essential tools in
developing new pharmaceuticals and can be used, among other
things, to test new therapeutic approaches in animal models
of human disease, as well as to image gene expression. These
imaging techniques can be used noninvasively in both detection
and quantification. However, functional images provide little
information on the structure of tissues and organs, which makes
the localization of lesions difficult. Image fusion techniques can
be exploited to map the functional images to structural images,
such as X-ray computed tomography (CT), to support target
identification and to facilitate the interpretation of PET or SPECT
studies. Furthermore, the mapping of two functional images of
SPECT and PET on a structural CT image can be beneficial for
those in vivo studies that require two biological processes to be
monitored simultaneously. This paper proposes an automated
method for registering PET, CT, and SPECT images for small
animals. A calibration phantom and a holder were used to deter-
mine the relationship among three-dimensional fields of view of
various modalities. The holder was arranged in fixed positions
on the couches of the scanners, and the spatial transformation
matrix between the modalities was held unchanged. As long as
objects were scanned together with the holder, the predetermined
matrix could register the acquired tomograms from different
modalities, independently of the imaged objects. In this work, the
PET scan was performed by Concorde’s microPET R4 scanner,
and the SPECT and CT data were obtained using the Gamma
Medica’s X-SPECT/CT system. Fusion studies on phantoms and
animals have been successfully performed using this method.
For microPET-CT fusion, the maximum registration errors were
0.21 mm ± 0.14 mm, 0.26 mm ± 0.14 mm, and 0.45 mm ± 0.34 mm
in the X (right-left), Y (upper-lower), and Z (rostral-caudal)
directions, respectively; for the microPET-SPECT fusion, they
were 0.24 mm ± 0.14 mm, 0.28 mm ± 0.15 mm, and 0.54 mm ±
0.35 mm in the X, Y, and Z directions, respectively. The results
indicate that this simple method can be used in routine fusion
studies.

Index Terms—Image fusion, micro computed tomography (CT),
micro positron emission tomography (PET), micro single-photon
emission computed tomography (SPECT), registration.

Manuscript received December 15, 2004; revised March 4, 2005. This work
was supported in part by the National Science Council, Taiwan, R.O.C., under
Grant NSC93-2623-7-007-015-NU. The work of K.-S. Chuang was supported
by the Medical Research Advancement Foundation under Joint Grant VGHTH
87-26-3. The Associate Editor responsible for coordinating the review of this
paper and recommending its publication was R. Jaszczak. Asterisk indicates
corresponding author.
M.-L. Jan is with the Institute of Nuclear Energy Research, Longtan, Taiwan
32546, R.O.C. She is also with the Department of Nuclear Science, National
Tsing-Hua University, Hsinchu, Taiwan 30043, R.O.C. (e-mail: mljan@iner.gov.tw).
*K.-S. Chuang is with the Department of Nuclear Science, National
Tsing-Hua University, Hsinchu, Taiwan 30013, R.O.C. (e-mail: kschuang@mx.nthu.edu.tw).
G.-W. Chen, Y.-C. Ni, C.-H. Chang, T.-W. Lee, and Y.-K. Fu are with the
Institute of Nuclear Energy Research, Longtan, Taiwan 32546, R.O.C.
S. Chen and J. Wu are with the Department of Nuclear Science, National
Tsing-Hua University, Hsinchu, Taiwan 30013, R.O.C.
Digital Object Identifier 10.1109/TMI.2005.848617
I. INTRODUCTION
NONINVASIVE technologies that image various aspects
of the disease process for clinical diagnosis are divided
into two types—structural and functional images. X-ray com-
puted tomography (CT) and magnetic resonance (MR) imaging
provide mainly high-resolution images with anatomical infor-
mation, whereas single-photon emission computed tomography
(SPECT) and positron emission tomography (PET) provide
functional information, but with coarser resolution. Fusing
these two types of images solves the problem of the insuffi-
ciency of the information provided by a single modality. In
recent years, the success of PET-CT imaging in the clinical field
has triggered substantial interest in noninvasive molecular and
anatomical imaging of small laboratory animals. The limited
spatial resolution and high specificity of imaging probes have
often enabled little morphologic information to be obtained
from micro PET or micro SPECT images, making molecular
investigations difficult to interpret [1]. Integrating anatomical
and functional tomographic datasets to improve both qualita-
tive detection and quantitative determination in small animal
molecular investigations is expected to provide much more
information.
The integration is achieved in two stages: the first is to
derive the geometric transformation between the two three-
dimensional (3-D) datasets (known as registration) and the
second is to fuse the images by generating a single image
from the two registered originals. The registration procedure
comprises three separate processes—identification, matching,
and verification. The identification process is the isolation of
features that can be directly compared. The matching process
compares the features of images from both modalities and
computes the optimal transformation between the two datasets.
Matching is formulated as the minimization of a cost function,
which measures the difference between the two sets of features
being identified. A final visual check must be performed for
confirmation.
Image registration must consider variations in an object's po-
sitioning among different scans. For instance, the actual orien-
tation and the section center, in relation to the object's anatomy,
are hard to keep the same for each scan. Moreover, the imaging
parameters such as section thickness, matrix size, pixel size,
and intersection gap are subject to the scanners' limitations.
To register images from different modalities, a geometric trans-
formation that incorporates translation, rotation, and scaling of
the datasets must be found. Various methods have
been proposed for finding the geometric transformation of dif-
ferent imaging modalities [2], including, for example, head-
holder alignment [3], external fiducial alignment [4], stereotaxic
frames [5], [6], and internal landmark matching [7]. The first
three methods use external markers attached to the object's skin
or face and cannot be used in a retrospective study if no markers
are applied in the experiment. The stereotaxic frames method is
the most accurate means of localization, but the invasiveness of
the frame limits its use. A more flexible way to accomplish reg-
istration is to consider anatomical features visible in the tomo-
grams from each modality [8]. This method has the potential for
higher registration accuracy than the use of external markers and
involves less interference with normal radiographic procedures.
All of these methods have been developed for human studies.
Therefore, when applied to studies in animals, which are much
smaller than humans, the number of suitable features is lim-
ited, and the features are commonly difficult to locate. Little is
known about the performance in small animals of image regis-
tration algorithms that are effective in human studies [9]. Stout
et al. [10] employed small-bore Teflon tubing filled with black
ink and F-18 as a marker and attached it to a wooden frame-
work to register the image. Although their goal was to generate
a stereotactic mouse atlas, their method can also be used in fu-
sion studies. Vaquero et al. [11] used external fiducial markers to
evaluate the capability of the automated image registration and
mutual information algorithms to register PET images of the rat
skull and brain to CT or MR images of the same animal. They
concluded that the mutual information algorithm appears to be
a robust method for registering PET, CT, and MR images of the
rat head. However, the mutual information algorithm requires
manual prealignment and volume trimming before registration.
The aim of this study was to develop an automatic method for regis-
tering small animal images. A calibration phantom and a holder
were designed to obtain the relationship among three-dimen-
sional fields of view of independent modalities. Animals were
secured to the holder, which was arranged at fixed locations on the
couches of scanners to determine the spatial transformation ma-
trix between modalities. If the position of the holder remains the
same, the predetermined matrix can register the acquired tomo-
grams from different modalities automatically and independently
of the imaged object. The geometric transformation between dif-
ferent modalities is simply a linear function approximation
problem, which can be solved using linear polynomial regres-
sion. The coordinates of the markers in a calibration phantom
from both modalities are used to determine the transformation
matrix between the two modalities. After the geometric relation-
ship of the two modalities has been derived, each voxel value of
the desired image can be identified by mapping its coordinates.
II. MATERIAL AND METHODS
A. Rigid Body Transformation
The acquired tomograms were assumed to be related by a
rigid body transformation. Any given 3-D position vector pair
$u_i$ and $v_i$, where $i = 1, 2, \ldots, n$ and $n$ is the number of
voxels, is related by

$$ v_i = S R\, u_i + T \quad (1) $$

where $S$ and $R$ are $3 \times 3$ scaling and rotation matrices and $T$
is a $3 \times 1$ translation vector. The objective of registration is
to find $S$, $R$, and $T$ such that the sum of squared errors (SSE),
given by

$$ \mathrm{SSE} = \sum_{i=1}^{n} \bigl\| v_i - (S R\, u_i + T) \bigr\|^2 \quad (2) $$

is minimized.

The relationship between $u_i = (x_i, y_i, z_i)^T$ and
$v_i = (x_i', y_i', z_i')^T$, where superscript $T$ represents matrix
transposition, can be estimated according to a polynomial equation.
The coefficients can be determined by differentiating the SSE with
respect to each coefficient and equating the results to zero, which
generates a set of normal equations in matrix form. For the first
output coordinate $x'$ these take the standard least-squares form

$$
\begin{bmatrix}
n & \sum x_i & \sum y_i & \sum z_i \\
\sum x_i & \sum x_i^2 & \sum x_i y_i & \sum x_i z_i \\
\sum y_i & \sum x_i y_i & \sum y_i^2 & \sum y_i z_i \\
\sum z_i & \sum x_i z_i & \sum y_i z_i & \sum z_i^2
\end{bmatrix}
\begin{bmatrix} a_0 \\ a_1 \\ a_2 \\ a_3 \end{bmatrix}
=
\begin{bmatrix}
\sum x_i' \\ \sum x_i x_i' \\ \sum y_i x_i' \\ \sum z_i x_i'
\end{bmatrix}
\quad (3)
$$

and analogous equations hold for $y'$ and $z'$. The matrices on both
sides of (3) were derived from the coordinates of the corresponding
markers in the calibration phantom tomograms from different
modalities. These equations were then solved to determine the
coefficients by matrix inversion using Gauss-Jordan elimination.
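As an illustration of the regression step just described, the following sketch fits the first-order polynomial mapping from matched marker coordinates by least squares. It is not the authors' code; the function names are illustrative, and numpy's least-squares solver is used here in place of explicit Gauss-Jordan elimination.

```python
import numpy as np

def fit_linear_transform(src, dst):
    """Least-squares fit of a first-order polynomial mapping src -> dst.

    src, dst : (n, 3) arrays of matched marker coordinates (e.g. PET and CT,
               in millimetres). Returns a (4, 3) coefficient array `coef`
               such that dst ~= [1, x, y, z] @ coef for each source point.
    """
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    # Design matrix with a constant column for the translation term.
    A = np.column_stack([np.ones(len(src)), src])      # shape (n, 4)
    # Equivalent to solving the normal equations (A^T A) coef = A^T dst.
    coef, residuals, _, _ = np.linalg.lstsq(A, dst, rcond=None)
    return coef

def apply_transform(coef, pts):
    """Map (n, 3) points with the fitted coefficients."""
    pts = np.asarray(pts, dtype=float)
    return np.column_stack([np.ones(len(pts)), pts]) @ coef
```

With the 39 phantom markers as input, the residual of such a fit corresponds to the regression SSE reported in Section III.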
B. Scanners for Micro PET, Micro CT, and
Micro SPECT Imaging
Micro CT images were obtained using an X-SPECT/CT
animal imaging system (Gamma Medica, Northridge, CA).
This was the first commercially available, dual modality,
SPECT and X-ray CT system for imaging animals in medical
research. The micro CT element of the X-SPECT/CT system
comprised a microfocus X-ray tube and a flat-panel X-ray
detector. The X-ray source was a fixed-anode tube with a 250-μm
spot size. The flat-panel detector was a 120-mm gadolinium
oxysulfide/CMOS detector with a 50-μm pitch. The animals
were positioned horizontally on a stationary bed and the X-ray
tube was rotated around the subjects. The distance from the
X-ray tube to detector was 298.0 mm, and the distance from the
X-ray tube to center of rotation was 225.0 mm. The operating
current and voltage of the X-ray tube were set to 0.5 mA and
50 kVp, respectively. Each scan had 256 projections with a
0.5-s exposure per projection. Tomograms were reconstructed
by applying the Feldkamp algorithm [12] on a volume of
512 × 512 × 512 voxels with an isotropic 0.15-mm voxel size.
The compact gamma camera used in X-SPECT was based on
pixelated NaI(Tl) scintillators in combination with a position-
sensitive photomultiplier tube readout. The scintillator array had
a 58 × 58 matrix of 2 × 2 × 6 mm³ crystals. Two gamma cam-
eras were mounted 180° apart about the axis of rotation. Various
interchangeable pinhole and parallel-hole collimators, with var-
ious hole-sizes, ranging from 0.5 to 2.0 mm, were used to opti-
mize the resolution, the sensitivity, and the field of view for par-
ticular applications. The projections were reconstructed using
ordered subsets expectation maximization reconstruction soft-
ware [13]. For the X-SPECT/CT system, the object in the fu-
sion study did not move, and the registration was performed
using built-in spatial transformation. After registration, both the
SPECT and the CT tomograms had 256 × 256 × 256 voxels with
an isotropic 0.3-mm voxel size.
The PET data were obtained with a microPET R4 system
(Concorde Microsystems, Knoxville, TN). The detector mate-
rial of the scanner models was lutetium oxyorthosilicate (CTI,
Knoxville, TN), with dimensions of 2.1 × 2.1 × 10 mm³ and a
center-to-center distance of 2.4 mm, providing a spatial resolu-
tion of around 2 mm at the center of the tomogram [14]. For
all measurements, a coincident time window of 6 ns and an en-
ergy window of 350-650 keV were used. After data histogram-
ming, 3-D sinogram datasets were reconstructed by 3-D repro-
jection [15], yielding a tomogram with 256 × 256 × 63 voxels.
The voxel size of the images was 0.25 × 0.25 × 1.22 mm³.
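Because the three scanners produce volumes with different, and in the PET case anisotropic, voxel sizes, voxel indices must be converted to physical millimetres before the rigid-body fit and back again when resampling. A minimal sketch, using the voxel spacings quoted above as illustrative constants:

```python
import numpy as np

# Voxel spacings quoted above, in mm (x, y, z); illustrative constants only.
PET_SPACING = np.array([0.25, 0.25, 1.22])
CT_SPECT_SPACING = np.array([0.30, 0.30, 0.30])   # registered X-SPECT/CT output

def voxel_to_mm(index, spacing):
    """Convert (i, j, k) voxel indices to physical millimetres."""
    return np.asarray(index, dtype=float) * spacing

def mm_to_voxel(pos_mm, spacing):
    """Convert physical millimetres back to (fractional) voxel indices."""
    return np.asarray(pos_mm, dtype=float) / spacing
```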
C. Designing the Animal Holder and Calibration Phantom
An animal holder and a calibration phantom were designed
for registration to determine the spatial relation between the mi-
croPET and X-SPECT/CT datasets. The 80-mm-wide 200-mm-
long holder (Fig. 1) was used to transfer the object between sys-
tems. Alignment markers on both sides of the couch helped to
place the holder at a fixed position in each scan. A bull's-eye spirit
level was mounted on the holder to improve further the precision
of the positioning. While the holder was in position, the spatial
relation between the images of these two modalities was fixed
and its transformation matrix remained constant.
Fig. 1. Animal holder was aligned with the markers on the couch. A bull's-eye
spirit level was mounted on top of the holder to improve the precision of the
positioning.
Fig. 2. (a) The calibration phantom consisted of three line sources A, B, and
C. (b) A typical transaxial slice of the phantom images showed three bright
spots that can be identified as the positions of the markers. Notably, source C is
always located between A and B. Let $d_{AC}$, $d_{BC}$, and $d_{AB}$ be the distances between
the markers A-C, B-C, and A-B, respectively. The ratios of these parameters are
used to find the corresponding markers in the target dataset.
To obtain the transformation matrices between the PET and
CT modalities, a calibration phantom placed on the holder was
scanned by the two scanners. The method assumes a rigid body
transformation. Although theoretically, three point markers
would provide a simpler method of uniquely determining the
transformation matrix, the limited resolution of images was such
that determining the exact locations of the matched markers was
not ensured. Errors will therefore be introduced in the estimation
of both $u_i$ and $v_i$ and, consequently, in the transformation matrix.
More markers are required to reduce these errors. Hill et al. [2]
showed that the target registration error decreases as $1/\sqrt{N}$,
where $N$ is the number of point markers. In this paper, three line
sources providing 39 markers are used to determine the transformation
matrix. Placing three line sources as 39 markers is much
easier and more practical than placing 39 point-source markers.
The calibration holder consisted of three glass capillary tubes
with an inner diameter of 1 mm and a length of 100 mm that were
placed on an acrylic housing. These three capillaries were filled
with [¹⁸F]2-deoxy-2-fluoro-D-glucose (¹⁸F-FDG) solution,
which appeared as bright spots in the transaxial image of the PET
output. The three capillaries were arranged [Fig. 2(a)] to maxi-
mize the spatial separations between each other and the central
capillary (C) was positioned between two side capillaries (A and
B). The capillaries appeared as white circles in the transaxial
slices of the CT image, and their centers were used to register spots in
PET images. The centers of the three capillaries crossing selected
slices in the reference image were located [Fig. 2(b)], and two po-
sition parameters (ratios of the inter-marker distances $d_{AC}$, $d_{BC}$,
and $d_{AB}$; see Fig. 2) were calculated. These parameters were used
to determine the corresponding markers in the target image.
Registration between SPECT and CT is already provided for
Gamma Medica's X-SPECT/CT scanner, so PET-SPECT fusion was
also achieved with reference to CT images after SPECT-CT
registration.
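A small sketch of how the position parameters of one slice could be computed from the three capillary centroids, following the distance definitions in Fig. 2. The exact pair of ratios used by the authors is not spelled out in the text, so the choice below is an assumption.

```python
import numpy as np

def position_parameters(a, b, c):
    """Distance-ratio position parameters for one transaxial slice.

    a, b, c : (x, y) centroids of capillaries A, B, and the central
    capillary C in the slice. The ratios (d_AC/d_AB, d_BC/d_AB) are one
    plausible, scale-invariant choice consistent with Fig. 2; the paper's
    exact definition is assumed here.
    """
    a, b, c = (np.asarray(p, dtype=float) for p in (a, b, c))
    d_ac = np.linalg.norm(a - c)
    d_bc = np.linalg.norm(b - c)
    d_ab = np.linalg.norm(a - b)
    return d_ac / d_ab, d_bc / d_ab
```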
D. Transformation Matrix
In this paper, PET images were selected as the reference
dataset. Thirteen equally spaced PET slices, uniformly dis-
tributed along the axial field of view, were selected, and the
locations of the three capillaries on these slices were used as
the reference markers. A total of 39 markers were therefore
divided into 13 groups of three markers each. All the coordinates
of the markers could be determined for each selected slice in the
PET images. The markers of each group in CT were located
using the following method. Let the coordinates of the three
markers (from left to right) on the same PET slice $z$ be given;
their corresponding locations in the CT images are then determined.
(Their axial coordinates $z_A'$, $z_B'$, and $z_C'$ are not necessarily
the same.) A two-step process was employed to solve for these
locations. An initial guess that $z_A'$, $z_B'$, and $z_C'$ were equal to $z$
was made, with the central capillary located between the two side
capillaries. The matching slice was easily found using the distance
ratio as a criterion, such that the mismatch Err between the position
parameters measured from the capillary locations in the PET and CT
images is minimal. The second step was to search for the matching
slice again, using the distance ratio as a criterion, within slices
$z' \pm D$, where $D$ is selected arbitrarily but should be sufficiently
large to cover possible errors in the placing of the holder. The
markers of the other 12 groups were located using the same two-step
process. The coordinates of these 39 matched markers were then used
to calculate the transformation matrix for PET-CT fusion. The same
method was applied to the SPECT datasets (after SPECT-CT
registration in the Gamma Medica system) for PET-SPECT fusion.
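The two-step slice search can be sketched as follows. The form of the cost Err is not recoverable from the text, so a sum of absolute differences between the PET and CT position parameters is assumed; the function and variable names are illustrative only.

```python
def find_matching_ct_slice(pet_params, ct_params_by_slice, initial_guess, half_width):
    """Locate the CT slice whose capillary pattern best matches a PET slice.

    pet_params         : (r1, r2) position parameters of the PET reference slice.
    ct_params_by_slice : dict {ct_slice_index: (r1, r2)} precomputed for the CT volume.
    initial_guess      : CT slice index assumed in step one (the PET slice index).
    half_width         : search window D used in step two, chosen large enough
                         to cover possible errors in placing the holder.
    """
    def err(slice_index):
        r1, r2 = ct_params_by_slice[slice_index]
        # Assumed cost: sum of absolute ratio differences.
        return abs(r1 - pet_params[0]) + abs(r2 - pet_params[1])

    # Step two: search the window [guess - D, guess + D] for the minimum cost.
    candidates = [s for s in ct_params_by_slice
                  if abs(s - initial_guess) <= half_width]
    return min(candidates, key=err)
```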
After the holder had been calibrated, registration was per-
formed for animal studies. Anesthetized animals were secured
to the holder to prevent movement during scanning. The holder,
together with the animals, was carefully positioned on the
couches for PET, CT, and SPECT scanning. The transformation
matrices determined by scanning the holder and the calibration
phantom enabled the spatial relationship among modalities to
be interpolated for animal locations in SPECT, CT and PET
scanners. Following scanning, the CT or SPECT images were
transformed into the PET coordinate system. This was achieved
by scanning all voxels of the PET datasets and locating the
corresponding voxels in the CT dataset using (3). After the
transformation had been performed, a transparent overlay of
images of one modality on the other was generated for evalua-
tion, each with different color lookup tables. Fig. 3 presents a
flowchart of this method.
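A minimal sketch of the resampling step just described, assuming the coefficients come from the least-squares fit sketched in Section II-A, arrays are stored in (z, y, x) order, and nearest-neighbour sampling is acceptable; none of these choices are stated in the paper.

```python
import numpy as np

def resample_to_pet_grid(ct_volume, coef, pet_shape, pet_spacing, ct_spacing):
    """Resample a CT (or SPECT) volume onto the PET voxel grid.

    For every PET voxel, the corresponding CT position is computed with the
    fitted coefficients and sampled with nearest-neighbour interpolation.
    ct_volume : CT array in (z, y, x) order; pet_shape likewise (Z, Y, X).
    """
    zs, ys, xs = np.indices(pet_shape)
    pet_mm = np.column_stack([xs.ravel(), ys.ravel(), zs.ravel()]) * pet_spacing
    ct_mm = np.column_stack([np.ones(len(pet_mm)), pet_mm]) @ coef
    ct_idx = np.rint(ct_mm / ct_spacing).astype(int)          # columns (x, y, z)
    # Keep only indices that fall inside the CT volume.
    limits = np.array(ct_volume.shape)[::-1]                  # (X, Y, Z)
    valid = np.all((ct_idx >= 0) & (ct_idx < limits), axis=1)
    out = np.zeros(pet_shape, dtype=ct_volume.dtype).ravel()
    out[valid] = ct_volume[ct_idx[valid, 2], ct_idx[valid, 1], ct_idx[valid, 0]]
    return out.reshape(pet_shape)
```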
III. RESULTS
Fig. 3. Flowchart of the proposed registration method.

Fig. 4. Regression results concerning the source positions in the calibration
phantom. The coordinates of the source locations were projected onto the X-Y
plane shown here.

Fig. 4 plots the coordinates of the locations of markers from
the images of the calibration phantom, as well as the coordi-
nates calculated using the transformation matrix. The markers
of both CT and PET in the fusion images were visually checked
and their registrations found to agree with each other closely.
The average SSE of the regression was 0.11 voxels. Notably,
the quality of the Feldkamp algorithm varies with the elevation
of the plane [12], [16]. For slices farther away from the cen-
tral plane, blurring in the axial direction occurs. The blurring
causes the exact positions of the markers to be difficult to iden-
tify. Therefore, about ten slices in the front and end sections of
the CT images were not used in the registration study.
The registration errors of this method come primarily from
the uncertainty in the alignment of the holder. The calibration
phantom together with the holder was repositioned at the same
location seven times for both CT and PET scanning to estimate
the holder-alignment errors. Position parameters (described in
Section II-D) of nine markers on three slices
of a PET image were selected to evaluate holder-alignment er-
rors among PET scans. The selected slices represent three axial
regions (rostral, central, and caudal) of the images. The nine
corresponding markers with the same position parameters in the
other six PET images were located. Fixed position parameters
that corresponded to the same three slices
were selected, and the
same procedure as PET was followed, to assess holder-align-
ment errors among CT scans. Table I presents the means and
standard deviations of three repositioned central markers (C) for
CT and PET. The deviations of these nine repositioned markers
reveal that the holder-alignment errors associated with PET and
CT are smaller than one voxel, except that of CT in the
-direc-
tion, because the couch of CT has no sensor. Therefore, the mo-
tions of the couch into scanned position were checked visually.
Registration errors were evaluated for various positional
setups for both PET and CT scans to test the robustness of
TABLE I
HOLDER-ALIGNMENT ERRORS AMONG PET AND CT SCANS
the system. Again, the calibration phantom and the holder
assembly were randomly repositioned, but not beyond the field
of view, seven times for PET scans. The same procedure was
followed for CT scans. Selecting the locations of the capillaries
in PET as the reference markers enabled the corresponding
positions in CT to be obtained either by the transformation
matrix or by the minimum position parameters method. In
these studies, let $p_{i,j,k}$ and $\hat{p}_{i,j,k}$, with $i = 1, \ldots, 39$ and
$j, k = 1, \ldots, 7$, be the coordinates of the $i$th marker in CT,
obtained by the minimum position parameters method using
the $j$th CT and the $k$th PET pair, and using the $k$th transfor-
mation matrix applied to the $k$th PET image, respectively.
The registration accuracy $E_j$ in the $j$th CT study is defined as
the maximum Euclidean distance difference over any pair of
the 39 marker coordinates for all seven PET images:

$$ E_j = \max_{i,\,k} \bigl\| p_{i,j,k} - \hat{p}_{i,j,k} \bigr\|. $$
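A short sketch of this accuracy measure for one CT-PET pair, assuming the marker coordinates are available in millimetres as arrays; the names are illustrative.

```python
import numpy as np

def registration_accuracy(direct_coords, predicted_coords):
    """Maximum Euclidean distance over corresponding marker pairs.

    direct_coords, predicted_coords : (39, 3) arrays holding, for one CT
    study and one PET study, the marker positions found by the minimum
    position parameters method and those predicted by the transformation
    matrix, respectively (in mm).
    """
    diffs = np.asarray(direct_coords, float) - np.asarray(predicted_coords, float)
    return np.linalg.norm(diffs, axis=1).max()
```

Taking the maximum of this value over all seven PET images gives the per-study error defined above.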
Fig. 5 shows the procedure for evaluating the registration
error in each CT study. The average registration errors in these
studies were 0.21 mm ± 0.14 mm, 0.26 mm ± 0.14 mm, and
0.45 mm ± 0.34 mm in the X (right-left), Y (upper-lower), and
Z (rostral-caudal) directions, respectively, for PET-CT fusion.
The same method was applied to the PET-SPECT fusion study,
and the registration errors were 0.24 mm ± 0.14 mm, 0.28 mm
± 0.15 mm, and 0.54 mm ± 0.35 mm along the X, Y, and Z
axes, respectively. These small registration errors indicate that
this method was quite robust. The registration error is higher in
the Z direction than in either the X or the Y direction, because
the holder-alignment error of CT is the highest along the Z axis,
and the resolution of PET is the lowest along the Z axis. Re-
gression relationships are valid only for values of the regressor
variable in the range of the original data. If the object is outside
the range of the calibration phantom, then extrapolations from
the regression variable deteriorate the image. In such a case, a
larger calibration phantom that covers the entire field of view is
required.
A Derenzo-like phantom was used to confirm this method.
The phantom consisted of hot rods of various sizes (1.2-, 1.6-,
2.4-, 3.2-, 4.0-, and 4.8-mm diameter) arranged in six groups.
The spacing between adjacent rods in each group was twice the
rod diameter. The phantom was 40.0 mm in diameter and 36.0
Fig. 5. Procedure for calculating the registration error in the fusion study. The
procedure was repeated for each of the seven CT studies.
mm in length. The Derenzo-like phantom was filled with 11.4
MBq ¹⁸F-FDG and KI solution for easy visualization by both
PET and CT. The phantom was taped tightly onto the holder.
After a 10-min PET scan, the phantom and the holder together
were moved to the CT couch to undergo another 10-min scan.
Fig. 6 presents typical transaxial and coronal slices of registered
CT, PET, and fusion images from the phantom dataset. The fu-
sion images show the close match between the locations of the
hot spots on both images.
A seven-week-old male C57BL/6 mouse was injected intra-
venously with Lewis lung carcinoma cells (LLC1) 16 days be-
fore imaging. After 16 days, the mouse, now with tumors and
weighing 19.5 g, was injected with 9.8 MBq ¹⁸F-FDG and
scanned by PET for 10 min from the time of injection, in a
single bed position. After the first PET scan, the mouse, together
with the holder, was shifted to the CT modality for a 10-min
CT scan. When the CT scan was finished, the mouse was trans-
ferred back to the PET scanner for another 10-min PET scan.
The second-step PET scan was begun at 60 min after injection.
After all PET and CT scans had been completed, the mouse was
sacrificed and the lungs and liver biopsied. For comparison, a fu-
sion study was conducted on a 22.0 g normal C57BL/6 mouse
injected with 9.4 MBq ¹⁸F-FDG. After 60 min of uptake, the
mouse underwent a PET scan (10 min scan time) and a CT scan
(10 min scan time). Both the normal mouse and the mouse with
tumors were anaesthetized by intravenously injecting suitable
anesthetics through the tail vein to prevent motion during the
PET and CT scans. The animal experiments were approved by
the Institutional Animal Care and Use Committee at the Insti-
tute of Nuclear Energy Research in Taiwan.
Fig. 7 displays selected coronal slices from the registered CT
and the 60-min ¹⁸F-FDG uptake PET datasets of the mouse
with tumors. Tumors in the lungs can be clearly interpreted with
the help of anatomic information from CT images. Comparing
Fig. 7 with the results of PET-CT registration of the normal
mouse (Fig. 8) clearly shows that no ¹⁸F-FDG was taken up
in the ribcage of the normal mouse, except in the heart. Tu-
mors were present not only in the lungs but also in the liver
(Fig. 7). The CT images without using contrast agent did not
clearly show the precise location of the liver, so the tumors in
the liver were identied by images obtained from the two-step
PET scans. Fig. 9(a) and (b) shows the two PET images, repre-
senting the first-step (0-10 min) and second-step (60-70 min)
Fig. 6. (Top) Transaxial slices and (bottom) coronal slices of registered (a) CT, (b) PET, and (c) fused images of the Derenzo-like phantom. The hot-rod diameters
in each of the six sectors are 4.8, 4.0, 3.2, 2.4, 1.6, and 1.2 mm. Notably, the hot spots (PET) fit well within the outlines of the bright rods (CT).
Fig. 7. (a) Anatomic information of a mouse obtained from CT. (b) Coronal slice displays the uptake of ¹⁸F-FDG in the mouse 1 h after injection. (c) The
locations of tumors (arrows) were identified with the help of anatomic information.
Fig. 8. Spatially registered coronal (a) CT and (b) PET images of a normal C57BL/6 mouse and (c) fused image. The ribcage exhibits no special ¹⁸F-FDG
uptake except in the heart.
¹⁸F-FDG metabolic distributions of the same mouse as in
Fig. 7, respectively. Notably, the registration between these two
PET images remained accurate even though the mouse under-
went CT scanning between these two PET studies. The liver can
be resolved in the first-step PET images, but it was hardly vis-
ible in the second-step PET images. The ¹⁸F-FDG uptake of
Fig. 9. Spatially registered, coronal, two-step PET images of a C57BL/6 mouse with Lewis lung carcinoma. (a) and (b) present the ¹⁸F-FDG uptake distributions
of the mouse from 0 to 10 min and from 60 to 70 min after injection. (c) Fused two-step PET images can help to locate the tumor in the liver.
Fig. 10. (a) CT, (b) SPECT, and (c) PET coronal images of the (top) abdominal region and (bottom) legs of a rat. (d) The fused image of the three modalities
clearly resolves the heart, chest, ribs, liver, and kidneys, as well as the bright spot (¹³¹I-p607e) in the right leg. Most of the ¹³¹I-p607e remained in the initial
position for 14 days and remained away from the heart, liver, and kidneys.
the normal liver was washed out rapidly, except where tumors
were present. Fig. 9(c) displays the superimposed PET images,
from which the location of the tumor (arrow) can be identified
in the liver. The tumors in the lungs and liver were confirmed by
microscopic examination after the study.
An adult Sprague-Dawley rat that weighed 650 g was injected
with both I-131 and F-18 and scanned to perform fusion of the
three modalities (PET-SPECT-CT). The rat was injected with
9.25 MBq ¹³¹I-p607e 14 days before the SPECT scan. The
¹³¹I-p607e peptide, combined with an oil-based adjuvant, was
used as a water-in-oil emulsion and was injected intramuscularly
into the right leg of the rat to assay for stability. Medium-energy
pinhole collimators were used in this imaging. The scan time
was 80 min/bed × 2 bed positions with 32 projections/bed, and
was followed by CT scans of 10 min/bed × 2 bed positions.
The rotational center-to-aperture distance was set to 32.7 mm,
corresponding to a 92.4-mm field of view. The rat and the holder
were transferred to the PET couch after the SPECT and CT scans
had been completed. A 10-min PET scan centered on the rat's ab-
domen was started immediately after 16.65 MBq ¹⁸F-FDG was
injected through the tail vein. The second 10 min PET scan was
focused on the legs and was begun immediately after the first bed
scan was completed. The SPECT, CT, and PET tomograms were
reconstructed separately. Both the SPECT and CT reconstructed
images were registered using software provided by Gamma
Medica. Two datasets with 256 × 256 × 256 voxels were ob-
tained after registration. The method described above was then
used to perform the PET-CT and PET-SPECT registrations.
Fig. 10(a)-(c) presents CT, SPECT, and PET coronal images,
respectively, of the (top) abdominal region and (bottom) legs
of the rat. Fig. 10(d) shows fused images of the three modali-
ties. The three-modality image, displayed in the upper panel of
Fig. 10(d), clearly presents the heart, chest, ribs, liver, and kid-
neys; the red spot [lower panel of Fig. 10(d)] in the right leg is
the ¹³¹I-p607e that had been injected 14 days before scanning.
The fusion images of the PET-SPECT-CT modalities indicate
that most of the ¹³¹I-p607e remained in the initial position and
was kept away from the heart, liver, and kidneys.
IV. DISCUSSION
Interest is growing in multimodality fusion in investigations
of small experimental animals. For such investigations, methods
must be devised by which the results of each modality can be in-
terpreted with reference to those of the other modalities at every
point in their conjoint imaging space. Registration algorithms
based on internal or external markers enable accurate image reg-
istration that is not based on assumptions concerning image con-
tent. The small size of animals and the limited resolution of the
PET system are such that the images do not always clearly show
the internal markers, and user involvement is required to iden-
tify these markers. The use of external markers depends on a
complex setup in each study, and the transformation matrices
required for various objects must be recalibrated each time. The
external fiducial markers might be unreliable in image registra-
tion. The skin position of the external markers does not corre-
late well with the body frame because skin has large freedom
of motion. Moreover, external markers can interfere with the
experimental design and have other practical shortcomings; for
example, attaching them rigidly to a living animal can be dif-
ficult. The method presented herein considers the holder as an ex-
ternal frame, and this does not interfere with any experimental
design. The fusion work can be performed on any part of the
imaged body. The transformation matrix is the same for animal
studies, as long as the scanning holder is placed at fixed loca-
tions. Therefore, the images acquired from various modalities
can be registered automatically using the predetermined matrix,
independently of the imaged object. The proposed method can
be used to perform fusion studies between any two modalities.
For instance, if the capillaries of the calibration phantom are filled
with MR contrast agent, then multimodality studies including
MRI can be undertaken as well. This method is also appropriate
for making simultaneous multianimal comparisons, as long as
the scanning holder and calibration phantom are large enough.
The registration accuracy depends strongly on the trans-
formation matrix. The transformation matrix is determined
from the scanned datasets of the calibration phantom with the
holder. The preparation of the calibration phantom including the
three line sources is simple and fast. In this study, 39 markers
were employed to determine the transformation matrix. To place
three line sources for 39 markers would be much easier and more
practical than to place 39 point-source markers. The markers of
the phantom were easy to identify, so no physician is required to
assess the registration. The calibration procedure is fully auto-
matic and involves no user intervention. The coordinates using
the transformation matrix must be calculated without extrapola-
tion to minimize the registration error. Therefore, the calibration
phantom should be made as large as possible to fit the field of
view of the scanners, and the markers of the calibration phantom
should cover the full range of the scanned object to achieve
accurate registrations in three dimensions. The performance of
this registration method is independent of the quality of the CT,
PET, and SPECT images. Neither the tissue contrast nor the
statistical quality of the PET and SPECT images influences the
registration accuracy, except in the imaging of the calibration
phantom to determine the transformation matrix. The animals
require no special preparation before imaging, and no extra
optimal data processing is required following imaging, so this
registration method is effective for routine fusion studies.
V. CONCLUSION
We have developed an automated registration method for an-
imal imaging among micro PET, micro CT, and micro SPECT
modalities that does not interfere in any way with normal image
acquisition. The predetermined transformation matrix, which
specifies the spatial relationship among various modalities, can
be used to register the acquired datasets, independently of the
imaged object. The transformation matrix is easily obtained
using the calibration phantom and the holder. The calibration
procedure is automatic and requires no user interaction. The
method is simple and requires no special preparation or
postprocessing. It is appropriate for routine fusion studies with
mass scanning.
ACKNOWLEDGMENT
The authors would like to thank C.-H. Yeh for his assistance
with the data acquisitions of CT and SPECT; and Dr. W. Lin and
C.-S. Shyu for their help with the ¹⁸F-FDG preparation.
REFERENCES
[1] S. R. Cherry and S. Gambhir, "Use of positron emission tomography in animal research," Inst. Lab. Animal Res. J., vol. 42, pp. 219-232, 2001.
[2] D. L. G. Hill, P. G. Batchelor, M. Holden, and D. J. Hawkes, "Medical image registration," Phys. Med. Biol., vol. 46, pp. R1-R45, 2001.
[3] M. W. Wilson and J. M. Mountz, "A reference system for neuroanatomical localization on functional reconstructed cerebral images," J. Comput. Assist. Tomogr., vol. 13, pp. 174-178, 1989.
[4] R. L. Phillips, E. D. London, J. M. Links, and N. G. Cascella, "Program for PET image alignment: effect of calculated differences in cerebral metabolic rates for glucose," J. Nucl. Med., vol. 31, pp. 2052-2057, 1990.
[5] L. R. Schad, R. Boesecke, W. Schlegel, G. H. Hartmann, V. Sturm, L. G. Strauss, and W. J. Lorenz, "Three dimensional image correlation of CT, MR and PET studies in radiotherapy planning of brain tumors," J. Comput. Assist. Tomogr., vol. 11, pp. 948-954, 1987.
[6] M. L. J. Apuzzo, P. T. Chandrasoma, D. Cohen, C. S. Zee, and V. Zelman, "Computed imaging stereotaxy: experience and perspective related to 500 procedures applied to brain masses," Neurosurgery, vol. 20, pp. 930-937, 1987.
[7] J. Zhang, M. F. Levesque, C. L. Wilson, R. M. Harper, J. Engel, R. Lufkin, and E. J. Behnke, "Multimodality imaging of brain structures for stereotactic surgery," Radiology, vol. 175, pp. 435-441, 1990.
[8] D. L. G. Hill, D. J. Hawkes, J. E. Crossman, M. J. Gleeson, T. C. S. Cox, and L. Bracey et al., "Registration of MR and CT images for skull base surgery using point-like anatomical features," Br. J. Radiol., vol. 64, pp. 1030-1035, 1991.
[9] N. Hayakawa, K. Uemura, K. Ishiwata, Y. Shimada, N. Ogi, and T. Nagaoka et al., "A PET-MRI registration technique for PET studies of the rat brain," Nucl. Med. Biol., vol. 27, pp. 121-125, 2000.
[10] D. Stout, P. Chow, B. Silverman, R. Leahy, X. Lewis, S. Gambhir, and A. Chatziioannou, "Creating a whole body digital mouse atlas with PET, CT and cryosection images," presented at the AMI Ann. Conf., San Diego, CA, Oct. 23-27, 2002.
[11] J. J. Vaquero, M. Desco, J. Pascau, A. Santos, I. Lee, J. Seidel, and M. V. Green, "PET, CT, and MR image registration of the rat brain and skull," IEEE Trans. Nucl. Sci., vol. 48, no. 4, pp. 1440-1445, Aug. 2001.
[12] L. A. Feldkamp, L. C. Davis, and J. W. Kress, "Practical cone-beam algorithm," J. Opt. Soc. Am. A, vol. 1, pp. 612-619, 1984.
[13] K. Iwata, B. E. Patt, J. Li, K. B. Parnham, T. Vandehei, and J. S. Iwanczyk et al., "Dedicated small animal MicroSPECT/CT system with stationary horizontal animal position," presented at the IEEE NSS-MIC, Portland, OR, Oct. 19-25, 2003.
[14] C. Knoess, S. Siegel, A. Smith, D. Newport, N. Richerzhagen, and A. Winkeler et al., "Performance evaluation of the microPET R4 PET scanner for rodents," Eur. J. Nucl. Med. Mol. Imag., vol. 30, pp. 737-747, 2003.
[15] P. E. Kinahan and J. G. Rogers, "Analytic three-dimensional image reconstruction using all detected events," IEEE Trans. Nucl. Sci., vol. 36, no. 1, pp. 964-968, Feb. 1989.
[16] B. D. Smith, "Image reconstruction from cone-beam projection: necessary and sufficient conditions and reconstruction methods," IEEE Trans. Med. Imag., vol. MI-4, pp. 14-25, 1985.