FIGURE 5 - uploaded by Klaus Mueller
Figure 5.1: Perspective voxel-driven splatting: First, the footprint polygon of voxel v_{x,y} is mapped onto the image plane; then the affected image pixels p_i ... p_{i+4} are mapped back onto the footprint table.


Source publication
Article
Full-text available
Cone-beam computed tomography (CT) is an emerging imaging technology, as it provides all projections needed for three-dimensional (3D) reconstruction in a single spin of the X-ray source-detector pair. This facilitates fast, low-dose data acquisition as required for imaging fast-moving objects, such as the heart, and intra-operative CT applications. Cu...

Contexts in source publication

Context 1
... now Figure 5.1, where we illustrate a new and accurate solution for perspective voxel-driven splatting. ...
Context 2
... the footprint polygon is placed orthogonal to the vector starting at the eye and going through the center of v_{x,y,z}. Note that this yields an accurate line integral only for the center ray; all other rays traverse the voxel kernel function at a slightly different orientation than given by the placement of the 2D (1D in Figure 5.1) footprint polygon in object space. ...
Context 3
... the plane orthogonal to the center ray (the vector source-v_{x,y,z}). From this equation we compute two orthogonal vectors u and w on the plane (only u is shown in Figure 5.1). Here, u and w are chosen such that they project onto the two major axes of the image. ...
Context 4
... u and w are chosen such that they project onto the two major axes of the image. Using u and w, we can compute the spatial x, y, z positions of the four footprint-polygon vertices in object space (V_Right(v_{x,y}) and V_Left(v_{x,y}) in the 2D case depicted in Figure 5.1). These four vertices are perspectively projected onto the image plane. ...
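The vertex construction described in this context can be sketched as follows. This is a minimal illustration, not the source's implementation: the eye position, voxel center, kernel extent `ext`, the `up` vector used to fix u, and a z-aligned image plane are all assumptions.

```python
# Sketch: build a footprint polygon orthogonal to the center ray eye -> voxel,
# then perspectively project its corners onto an image plane z = plane_z.
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def footprint_vertices(eye, voxel, ext, up=(0.0, 1.0, 0.0)):
    """Return the four object-space corners of a square footprint polygon
    of half-extent `ext`, orthogonal to the center ray (degenerate if the
    center ray is parallel to `up`)."""
    d = normalize([voxel[i] - eye[i] for i in range(3)])  # center-ray direction
    u = normalize(cross(d, list(up)))                     # in-plane axis 1
    w = cross(d, u)                                       # in-plane axis 2
    corners = []
    for su, sw in ((-1, -1), (1, -1), (1, 1), (-1, 1)):
        corners.append([voxel[i] + su * ext * u[i] + sw * ext * w[i]
                        for i in range(3)])
    return corners

def project(eye, point, plane_z):
    """Perspective-project an object-space point onto the plane z = plane_z."""
    t = (plane_z - eye[2]) / (point[2] - eye[2])
    return [eye[0] + t * (point[0] - eye[0]),
            eye[1] + t * (point[1] - eye[1])]
```

For a voxel at (0, 0, 10) seen from the origin, all four corners land in the plane z = 10, i.e., orthogonal to the center ray, matching the placement described in the text.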
Context 5
... a pixel ray is shot into the 3D field of interpolation kernels, it stops at each slice and determines the range of voxel kernels within the slice that are traversed by the ray. This is shown in Figure 5.2a for the 2D case: the ray originating at pixel p_i pierces the volume slice located at x_s at y = y(i, x_s). ...
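The slice traversal in this context can be sketched in 2D: intersect the pixel ray with the slice at x_s, then list the voxel-kernel rows that the ray can overlap there. Unit grid spacing and the kernel radius `r` are assumptions for illustration.

```python
# Sketch: where does the ray from pixel p_i hit the slice x = x_s, and which
# voxel kernels (radius r, unit row spacing) can it traverse there?
import math

def slice_hit(origin, direction, x_s):
    """Return y(i, x_s): the y-coordinate where the ray crosses x = x_s."""
    t = (x_s - origin[0]) / direction[0]
    return origin[1] + t * direction[1]

def kernel_range(y_hit, r):
    """Integer voxel rows whose kernels (radius r) overlap the hit point."""
    return list(range(math.ceil(y_hit - r), math.floor(y_hit + r) + 1))
```

For example, a ray from (0, 0) with direction (1, 0.5) hits the slice x_s = 4 at y = 2, and a kernel radius of 2 yields rows 0 through 4.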
Context 6
... point of the ray at y(i, x_s), respectively (see Figure 5.2b). One finds: ...
Context 7
... now Figure 5.3. The coordinates of an image pixel can be expressed as p_{ij} = image_origin + i·u + j·v, where u, v are the orthogonal image-plane axis vectors. ...
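The pixel-coordinate formula p_{ij} = image_origin + i·u + j·v is direct to write down (vectors as plain tuples; all input values below are illustrative examples):

```python
# Sketch: pixel (i, j) position on the image plane spanned by axis vectors u, v.
def pixel_position(image_origin, u, v, i, j):
    return tuple(image_origin[k] + i * u[k] + j * v[k] for k in range(3))
```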
Context 8
... ray-grid sampling rate ω_r is then a 2D vector (ω_{ru}, ω_{rv}) that is related to the local sheet spacings. Figure 5.3 illustrates how ω_{rv} is calculated. ...
Context 9
... due to the fact that u lies in the x-z plane and v is aligned with the y-axis of the volume grid, we may employ a simpler method that does not compute the perpendicular distance between two adjacent cutting planes, but uses the slice-projected distance between two adjacent cutting planes as they intersect with the volume slice most parallel to the projection plane. This approximation is shown for the 2D case in Figure 5.4 (we have also looked at this approximation, in a slightly different context, in Section 4.4). ...
Context 10
... this purpose, we designed a scheme that keeps two active slabs, composed of sheets of voxel cross-sections. These voxel cross-sections are formed by intersecting the voxel kernels by consecutive horizontal cutting planes (recall Figure 5.3). In this scheme, one active slab, slab_p, is composed of voxels that are currently projected, while the other, slab_b, is composed of currently backprojected voxels. ...
Context 11
... section addresses the last error in the list given above. Figure 5.5 shows, for the 2D case, the footprint polygon being aligned perpendicularly to the center ray. ...
Context 12
... section addresses the other two errors in the list given above. Figure 5.6 compares (for the 2D case) the projection of a footprint polygon of a voxel kernel located at (x_v, y_v). The footprint polygon has extent 2·ext and is projected onto the projection plane, located at x_s on the viewing axis. ...
Context 13
... that we are only considering voxels within a spherical reconstruction region (a circle in 2D). Hence, the error is largest along the boundary of this sphere, where ϕ_c is given by the following expression (see also Figure 5.7): ...
Context 14
... the angle ϕ_c given by equation (5.11) into equation (5.10) yields the maximum normalized absolute error due to the non-perpendicular alignment of the footprint polygons in the context of 3D reconstruction. This error is plotted in Figure 5.8 for x ≤ x_ctr (and n = 128, γ = 30°, ext = 2.0). We observe that the largest error is close to 0.8 pixels. ...

Similar publications

Article
Full-text available
We describe a CME event, which occurred in NOAA 11059 on April 3, 2010, using STEREO and MDI/SOHO data. We analyze the CME evolution using data provided by the SECCHI-EUVI and COR1 instruments onboard the STEREO satellites, and we perform a 3D reconstruction of the CME using the LCT-TP method. Using MDI/SOHO line-of-sight magnetograms we analyze the magnetic configuration of...
Article
Full-text available
While recent deep neural networks have achieved promising results for 3D reconstruction from a single-view image, they rely on the availability of RGB textures in images and extra information as supervision. In this work, we propose novel stacked hierarchical networks and an end-to-end training strategy to tackle a more challenging task for the fi...

Citations

... The calculation of the MART reconstruction follows the method described in Mueller (1998) [24] via the so-called raycasting or splatting procedure. As shown in his work, the splatting procedure is more accurate and is therefore used herein. ...
Article
Full-text available
The calibration of a multi-camera system for volumetric measurements is a basic requirement of reliable 3D measurements and object tracking. In order to refine the precision of the mapping functions, a new, tomographic reconstruction-based approach is presented. The method is suitable for Volumetric Particle Image Velocimetry (PIV), where small particles, drops or bubbles are illuminated and precise 3D position tracking or velocimetry is applied. The technique is based on the 2D cross-correlation of original images of particles with regions from a back projection of a tomographic reconstruction of the particles. The off-set of the peaks in the correlation maps represent disparities, which are used to correct the mapping functions for each sensor plane in an iterative procedure. For validation and practical applicability of the method, a sensitivity analysis has been performed using a synthetic data set followed by the application of the technique on Tomo-PIV measurements of a jet-flow. The results show that initial large disparities could be corrected to an average of below 0.1 pixels during the refinement steps, which drastically improves reconstruction quality and improves measurement accuracy and reliability.
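The disparity estimation this abstract describes can be illustrated with a small sketch: cross-correlate an original particle-image patch with the corresponding back-projected patch and read the peak offset as the disparity. The brute-force search, window size, and function names below are assumptions, not the paper's implementation.

```python
# Sketch: estimate the shift between two image patches as the offset that
# maximizes their zero-mean cross-correlation over a small search window.
import numpy as np

def disparity(patch_a, patch_b, search=3):
    """Return the (dy, dx) shift to apply to patch_b that best aligns it
    with patch_a, searched over +/- `search` pixels (circular shifts)."""
    a = patch_a - patch_a.mean()
    best, best_shift = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            b = np.roll(patch_b, (dy, dx), axis=(0, 1))
            score = float(np.sum(a * (b - b.mean())))  # zero-mean correlation
            if score > best:
                best, best_shift = score, (dy, dx)
    return best_shift
```

In the paper's iterative procedure such peak offsets would then feed back into corrections of the per-camera mapping functions; here the function only recovers the shift itself.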
... The geometrical configuration of the experiment is such that the X-ray beam can be considered as parallel, since its angle of divergence is less than 1 deg. [26]. The sinogram of the sample is shown in Fig. 3a. ...
... In such cases, the actual resolution at which a 3D image can be reconstructed is lower than the theoretical maximum based on the detector resolution [7]. When using standard reconstruction algorithms, such as the well-known Filtered Backprojection method (FBP) or Algebraic Reconstruction Technique (ART), the resolution of the reconstructed volume is inherently limited by the resolution and signal-to-noise ratio of the acquired projection data [8,9]. ...
Article
Full-text available
In tomography, the resolution of the reconstructed 3D volume is inherently limited by the pixel resolution of the detector and by optical phenomena. Machine learning has demonstrated powerful capabilities for super-resolution in several imaging applications. Such methods typically rely on the availability of high-quality training data for a series of similar objects. In many applications of tomography, existing machine learning methods cannot be used because scanning such a series of similar objects is either impossible or infeasible. In this paper, we propose a novel technique for improving the resolution of tomographic volumes that is based on the assumption that the local structure is similar throughout the object. Therefore, our approach does not require a training set of similar objects. The technique combines a specially designed scanning procedure with a machine learning method for super-resolution imaging. We demonstrate the effectiveness of our approach using both simulated and experimental data. The results show that the proposed method is able to significantly improve the resolution of tomographic reconstructions.
... The geometrical configuration of the experiment is such that the X-ray beam can be considered as parallel, since its angle of divergence is less than 1 deg. [30]. No filter or other optics were used to produce a monochromatic beam, so the operating mode was polychromatic. ...
Article
Full-text available
Standard approaches to tomographic reconstruction of projection data registered with polychromatic emission lead to cupping artifacts and spurious lines between regions of strong absorption. The main reason for their appearance is that part of the low-energy emission is absorbed entirely by highly absorbing objects. This effect is known as beam hardening (BH). A procedure for processing projection data collected in polychromatic mode is presented; it reduces artifacts related to BH and does not require additional calibration experiments. The procedure consists of two steps: first, the projection data are linearized with a one-parameter power correction, and second, the images are reconstructed from the linearized data. Automatic parameter adjustment is the main advantage of the procedure. The optimization problem is formulated and the system flowchart is presented. Reconstruction with different powers of correction is considered to evaluate the reconstruction quality.
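The first step of the two-step procedure, the one-parameter power correction, amounts to an elementwise p → p^γ on the projection data. The sketch below is illustrative only; the exponent `gamma` stands in for the automatically adjusted parameter the abstract mentions.

```python
# Sketch: one-parameter power correction p -> p**gamma applied elementwise
# to projection data (list of rows), used to linearize polychromatic data
# before reconstruction.
def power_correct(projections, gamma):
    return [[p ** gamma for p in row] for row in projections]
```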
... Following the latter definition, i.e. a fixed value to be specified, the value must, according to [20], lie between 0 and 2. For each method and each underlying data-acquisition modality (SPECT, PET, CT, heat, etc.), various values are proposed [1,27,28,30], and it is pointed out that the choice of the relaxation parameter also depends on the reconstruction volume and the noise level of the image data [27]. There are also approaches that determine relevant parameter sets for the reconstruction with an algorithm trained by machine learning. ...
... If one regards it as a variable relaxation factor, as it is also frequently called in the literature, many possibilities for variation present themselves. In [28], for example, a linear increase of the relaxation factor over the course of the iteration process is proposed. ...
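The linearly increasing relaxation factor mentioned here is a one-line schedule. The start and end values below are illustrative placeholders, chosen inside the (0, 2) range cited for a fixed relaxation parameter:

```python
# Sketch: relaxation factor growing linearly from lam_start to lam_end over
# n_iter iterations (k runs from 0 to n_iter - 1).
def relaxation(k, n_iter, lam_start=0.1, lam_end=1.0):
    return lam_start + (lam_end - lam_start) * k / (n_iter - 1)
```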
Thesis
Full-text available
Computed tomography has established itself in many fields as an important tool for non-destructive imaging of structures inside objects. Not only in medicine, but also in quality management, the security sector, and materials research, it is a recognized method for obtaining information about structural properties beneath the surface of the examined objects without having to open them first and thereby possibly damage or alter them. With this broad range of applications, the demands on CT have also grown, particularly regarding the reduction of artifacts in the reconstruction result. This thesis addresses the reduction of such artifacts, especially those that manifest as artificial structures. On the one hand, it investigates how different acquisition geometries affect the distribution and severity of artifacts in the reconstruction volume. On the other hand, by extending and adapting the reconstruction algorithms, a mechanism for avoiding artifact formation during reconstruction is introduced, characterized, and evaluated against other established methods. It is based on weighting certain individual input data according to their influence on the formation of specific artifact types. The work shows that the developed methods reduce metal artifacts and edge effects in particular, but also beam-hardening and ring artifacts.
... Starting from an initial guess for the reconstructed object, SART performs a sequence of iterative grid projections and correction back-projections until the reconstruction has converged [6]. An update of the current image is performed after all rays in one projection are processed. ...
... SART has many advantages over FDK, such as better noise tolerance and handling of sparse and non-uniformly distributed projection datasets. However, computation time is considerably higher [6]. ...
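The SART loop structure described in these two snippets, where all rays of one projection (view) are processed before the image is updated once, can be sketched on a tiny dense linear system A x = b. This is a toy illustration under stated assumptions (dense matrix, hand-chosen relaxation `lam`, `views` grouping row indices per projection), not the cited implementation:

```python
# Sketch of SART on A x = b: accumulate row-normalized residual corrections
# over all rays of one view, then apply a single relaxed update per view.
def sart(A, b, views, n_iter=60, lam=0.5):
    n = len(A[0])
    x = [0.0] * n
    for _ in range(n_iter):
        for rows in views:                    # one projection (view) at a time
            num = [0.0] * n
            den = [sum(A[r][j] for r in rows) for j in range(n)]
            for r in rows:                    # all rays of this view first
                row_sum = sum(A[r]) or 1.0
                resid = (b[r] - sum(A[r][j] * x[j] for j in range(n))) / row_sum
                for j in range(n):
                    num[j] += A[r][j] * resid
            for j in range(n):                # single image update per view
                if den[j]:
                    x[j] += lam * num[j] / den[j]
    return x
```

On a trivial diagonal system the iterates converge to the exact solution; real CT systems would of course use sparse, ray-driven weights rather than an explicit matrix.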
Conference Paper
Full-text available
In C-arm computed tomography there are certain constraints, due to the data acquisition process, which can cause limited raw data. The reconstructed image's quality may decrease significantly depending on these constraints. To compensate for severely under-sampled projection data during reconstruction, special algorithms that are more robust to such ill-posed problems have to be utilized. In the past few years it has been shown that reconstruction algorithms based on the theory of compressed sensing are able to handle incomplete data sets quite well. In this paper, the iterative iTV reconstruction method by Ludwig Ritschl et al. is analyzed regarding its ability to eliminate image artifacts caused by incomplete raw data, with respect to the settings of its various parameters. The evaluation of iTV and the data dependency of the iterative reconstruction's parameters is conducted in two stages. First, projection data with severe angular under-sampling are acquired using an analytical phantom. Proper reconstruction parameters are selected by analyzing the reconstruction results for a set of proposed parameters. In a second step, multiple phantom data sets are acquired with limited-angle geometry and a small number of projections. The iTV reconstructions of these data sets are compared to short-scan FDK and SART reconstruction results, highlighting the distinct data dependence of the iTV reconstruction parameters.
... Other implementations ensure iterative reconstruction techniques and low-dose protocols. 24 It is generally accepted that reconstructions from circular orbits are insufficient for accurate reconstructions of the volume, 25,26 which is mathematically proven by violation of the Tuy condition, 27 which requires that every plane intersecting the object under study must intersect the focal trajectory. 27 The Feldkamp et al 7 algorithm itself only approximates the line integrals of the basic principles of the Radon transform. ...
Article
Objectives: Spatial resolution is one of the most important parameters objectively defining image quality, particularly in dental imaging, where fine details often have to be depicted. Here, we review the current status of assessment parameters for spatial resolution and of published data regarding spatial resolution in CBCT images. Methods: The current concepts of visual (line-pair measurement) and automated (modulation transfer function, MTF) assessment of spatial resolution in CBCT images are summarized and reviewed. Published measurement data on spatial resolution in CBCT are evaluated and analyzed. Results: The effective (i.e., actual) spatial resolution available in CBCT images is influenced by the 2D detector, the 3D reconstruction process, patient movement during the scan, and various other parameters. In the literature, the values range between 0.6 lp/mm and 2.8 lp/mm (visual assessment, median: 1.7 lp/mm) versus MTF (range: 0.5 cycles/mm to 2.3 cycles/mm, median: 2.1 lp/mm). Conclusions: The spatial resolution of CBCT images is approximately one order of magnitude lower than that of intraoral radiographs. Considering movement, scatter effects, and other influences in real-world scans of living patients, a realistic spatial resolution of just above 1 lp/mm can be expected.
... In this algorithm, starting at an initial guess µ^0_{1,2}, the solution, which is located at the intersection of the red and blue lines, is iteratively approximated by casting orthogonal projections from line to line. The ART algorithm works analogously [63], with the difference that orthogonal projections are cast onto hyperplanes generalized to N dimensions. ...
... Starting at an initial guess µ^0_{1,2}, the solution, which is located at the intersection of the red and blue lines, is iteratively approximated by casting orthogonal projections from line to line. The ART algorithm works analogously [63], with the difference that orthogonal projections are cast onto hyperplanes generalized to N dimensions. (Figure 6.5: Sinogram of an object imaged in cone-beam geometry.) ...
Thesis
Full-text available
Spectroscopic photon-counting X-ray detectors, such as the Medipix detector, offer new prospects for computed tomography. Besides a high conversion efficiency, these detectors can measure the energy of impinging X-ray quanta. This additional spectroscopic information makes it possible to correct for beam-hardening and scattering effects, and also to obtain a material-dependent spectroscopic attenuation map of a sampled volume. The work at hand addresses the problem of tomographic volume reconstruction from projection data obtained by such detectors. Volumetric data in computed tomography are extracted by means of volume reconstruction algorithms. Despite its significantly increased computational complexity, iterative reconstruction is employed in this work in place of filtered backprojection (FBP), the standard volume reconstruction procedure in computed tomography. The increased computational cost of iterative reconstruction is compensated for by its high flexibility, which, among other effects, allows polychromaticity, beam hardening, and scattering to be taken into account directly. Within the current work, a fully operational reconstruction software package has been developed and tested with synthetic and experimental data. The algorithms implemented comprise ART as well as maximum-likelihood-type algorithms. The pixel pitch of current Medipix3 detectors is 55/110 μm, depending on the pixel layout. To allow for system resolutions in the range of a few μm, strong magnification must be employed. To this purpose, a software imaging system, a so-called (distance-driven) raytracer designed for cone-beam imaging, has been implemented as part of this work. Defective pixels, due to the complex pixel electronics of photon-counting detectors, are a constant nuisance. In order not to introduce artificial information, removal of defective pixels within the reconstruction routine is achieved without interpolation of raw data.
Besides standard filtering methods, a statistical pixel filter is presented that can detect defective pixels even if a malfunction occurs only temporarily during a measurement. Based on the recent theory of compressive sensing, a new alternating-direction total-variation-constrained reconstruction algorithm is presented. This algorithm is shown to yield results superior to those of unregularized iterative reconstruction algorithms, especially in cases of heavy undersampling and truncation of projection data. Evaluation of experimental data demonstrates that this newly developed reconstruction routine achieves good contrast at a spatial resolution of 5 μm, the highest spatial resolution achievable given the focal-spot size of the X-ray tube used.
... The purpose of our study was to implement a tomosynthesis algorithm based on the simultaneous algebraic reconstruction technique (SART) [11], suitable for implementation in industrially produced medical X-ray laminography equipment, and to compare the performance of X-ray laminography with the possibilities provided by digital tomosynthesis. The SART algorithms are among the most efficient in digital tomosynthesis [12]. ...
... The tomosynthesis algorithm was based on the ideas from [11] and utilized SART. It was implemented in C++ using the Qt, VTK, and OpenMP open-source libraries. ...
Conference Paper
An experimental comparison of the performance of medical X-ray laminography and digital tomosynthesis algorithms was carried out. The results of the comparison show the promise of integrating digital tomosynthesis algorithms into currently produced medical X-ray laminography complexes.
... If there is no contribution of the ray to the voxel at all, w_{vp} amounts to zero. The calculation of w_{vp} is treated in [12]. ...
... The reconstructed volume contains 200 × 200 × 40 voxels. We executed the iterative reconstruction algorithm SART [12] up to the 12th iteration step. After each iteration step, we evaluated the whole volume, providing the three materials of which the test object consists: m_1 = 0.0, ...
Article
Full-text available
An important part of computed tomography is the calculation of a three-dimensional reconstruction of an object from series of X-ray images. Unfortunately, some applications do not provide sufficient X-ray images. Then, the reconstructed objects no longer truly represent the original. Inside of the volumes, the accuracy seems to vary unpredictably. In this paper, we introduce a novel method to evaluate any reconstruction, voxel by voxel. The evaluation is based on a sophisticated probabilistic handling of the measured X-rays, as well as the inclusion of a priori knowledge about the materials that the object receiving the X-ray examination consists of. For each voxel, the proposed method outputs a numerical value that represents the probability of existence of a predefined material at the position of the voxel while doing X-ray. Such a probabilistic quality measure was lacking so far. In our experiment, false reconstructed areas get detected by their low probability. In exact reconstructed areas, a high probability predominates. Receiver Operating Characteristics not only confirm the reliability of our quality measure but also demonstrate that existing methods are less suitable for evaluating a reconstruction.