Figure 3: Memory hierarchy

Similar publications

Article
Current replication strategies update replicas by analyzing the data access pattern over a fixed period of time. As a result, access latency temporarily increases, because responses to a changing access pattern are delayed until the next replica update. We therefore propose a real-time data replication strategy that can update a replica by r...
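The excerpt only hints at the mechanism, but the core idea, reacting to access-pattern changes as they happen rather than on a fixed analysis interval, can be sketched. The monitor below is a hypothetical illustration (the class name, decay model, and threshold are all assumptions, not the paper's algorithm):

```python
from collections import defaultdict

class RealTimeReplicator:
    """Toy model: trigger a replica update as soon as the access
    pattern shifts, instead of on a fixed analysis interval.
    All parameters here are illustrative assumptions."""

    def __init__(self, hot_threshold=3.0, decay=0.9):
        self.rates = defaultdict(float)  # decayed access rate per block
        self.replicated = set()
        self.hot_threshold = hot_threshold
        self.decay = decay

    def record_access(self, block_id):
        # Exponentially decay all rates so stale accesses fade out.
        for b in self.rates:
            self.rates[b] *= self.decay
        self.rates[block_id] += 1.0
        # React immediately: replicate a block the moment it turns hot.
        if self.rates[block_id] > self.hot_threshold and block_id not in self.replicated:
            self.create_replica(block_id)

    def create_replica(self, block_id):
        self.replicated.add(block_id)
        print(f"replicating block {block_id}")

# A block accessed repeatedly in quick succession is replicated at once,
# without waiting for a periodic pattern-analysis pass.
r = RealTimeReplicator()
for _ in range(5):
    r.record_access("block-7")
```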
Article
Improving the braking skills of a rider through a real-time training device embedded in the motorcycle is one possible strategy for addressing the safety issues associated with powered two-wheelers. A challenging aspect of the braking trainer system is estimating the adherence between the tyre and the road surface at each wheel. This pap...
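In its simplest quasi-static form, the per-wheel adherence the excerpt refers to is the ratio of longitudinal braking force to vertical wheel load. A minimal sketch under that assumption (the function name and inputs are hypothetical; the paper's actual estimator is certainly more involved):

```python
def estimate_adherence(brake_force_n, wheel_load_n):
    """Estimate the utilised tyre-road friction coefficient at one
    wheel as braking force divided by vertical load, both in newtons.
    A simple quasi-static assumption, not the paper's method."""
    if wheel_load_n <= 0:
        raise ValueError("wheel load must be positive")
    return brake_force_n / wheel_load_n

# Example: 900 N of braking force on a wheel carrying 1500 N -> mu = 0.6
mu_front = estimate_adherence(900.0, 1500.0)
```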
Article
Semi-partitioned scheduling is a recent approach to allocating tasks on multiprocessor platforms: by splitting some tasks between processors, it improves processor utilization. In this paper, a new semi-partitioned scheduling algorithm called SS-DRM is proposed for multiprocessor platforms. The scheduling policy use...
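Semi-partitioned allocation in general works as sketched below: tasks are packed onto processors bin-packing style, and a task that does not fit entirely on one processor is split, with its remaining share placed on another. This generic first-fit sketch is an illustration only, not the SS-DRM policy itself:

```python
def semi_partition(tasks, num_procs, capacity=1.0):
    """First-fit assignment with task splitting: place each task's
    utilisation on the first processor with room; if it does not fit
    entirely, split the remainder onto the next processor(s).
    Generic sketch, not the SS-DRM algorithm from the paper."""
    load = [0.0] * num_procs
    assignment = []  # (task_id, processor, utilisation share)
    for tid, util in tasks:
        remaining = util
        for p in range(num_procs):
            room = capacity - load[p]
            if room <= 1e-9:
                continue
            share = min(room, remaining)
            load[p] += share
            assignment.append((tid, p, share))
            remaining -= share
            if remaining <= 1e-9:
                break
        if remaining > 1e-9:
            raise ValueError(f"task {tid} does not fit")
    return assignment

# Example: two processors, three tasks; task "b" is split across both.
print(semi_partition([("a", 0.6), ("b", 0.7), ("c", 0.5)], 2))
```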

Citations

... Another possible approach to suppressing energy dissipation is the wavelet turbulence technique [62], which adds turbulence features at multiple scales in the flow and can be implemented as a post-processing step on the GPU [40]. ...
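As a rough illustration of the cited idea, synthesizing small-scale turbulence as a post-process on top of a coarse simulation, here is a CPU sketch. The noise construction and weighting are simplifying assumptions; the actual wavelet turbulence method [62] advects wavelet noise and matches the turbulent energy spectrum:

```python
import numpy as np

def add_wavelet_turbulence(velocity, energy, num_bands=3, strength=0.2, rng=None):
    """Post-process a coarse 2D velocity field (shape (H, W, 2)) by
    injecting band-limited noise at several spatial scales, weighted
    by a per-cell small-scale energy estimate (shape (H, W)).
    A simplified CPU stand-in for the GPU post-processing step."""
    rng = rng or np.random.default_rng(0)
    out = velocity.copy()
    H, W = velocity.shape[0], velocity.shape[1]
    for band in range(num_bands):
        scale = 2 ** band
        # Cheap band-limited noise: a coarse random field upsampled
        # by nearest-neighbour replication, then cropped to (H, W).
        h, w = -(-H // scale), -(-W // scale)  # ceil division
        noise = rng.standard_normal((h, w, 2))
        noise = np.kron(noise, np.ones((scale, scale, 1)))[:H, :W]
        out += strength * (0.5 ** band) * energy[..., None] * noise
    return out
```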
Thesis
The sizes of data are increasing at a very rapid pace in many applications, including medical visualization, physical simulations and industrial scanning. Some of this growth is not only due to the development of high resolution medical and industrial scanners, but also due to the wide availability of high performance graphics processing units (GPUs) which allow for the interactive rendering of large datasets. However, the increase of problem sizes has generally outpaced the increase of onboard GPU memory. At the same time, the resolution of the traditional display systems has not improved significantly compared to the exponential growth of computing power. We have developed a comprehensive approach that tackles the efficiency of the data representation through lattice-based techniques, as well as the visualization capabilities for exploring that data. We have constructed the Immersive Cabin and the Reality Deck facilities, along with a set of visualization techniques, to address the challenge of growing data sizes.

In terms of sampling lattices, we have developed a Computational Fluid Dynamics (CFD) simulation framework based on the lattice Boltzmann method and using optimal sampling lattices. Our focus is specifically on the Face-centered Cubic lattice (FCC), which can achieve a stable simulation with only 13 lattice velocities while at the same time improving the sampling efficiency compared to using the traditional Cartesian grid. We demonstrate the resulting fD3Q13 LBM model for use in highly interactive smoke dispersion simulations. The simulation code is coupled with our visualization framework, which includes a high-performance volume renderer and support for virtual reality systems. The volume rendering is further enhanced with a novel LOD scheme for large volume data that allows for mixing the optimal sampling lattices in adjacent levels of the hierarchy with a computationally cheap indexing function.

We have developed a visualization framework for the Immersive Cabin which supports the traditional virtual reality components, such as distributed rendering, stereo, tracking, rigid-body physics and sound. The lattice-based visualization and simulation techniques, including the support for mixed-lattice hierarchies, are integrated in the framework and can be combined with the mesh rendering and the rigid-body physics simulation. Based on our experience in the Immersive Cabin, we have designed and constructed the Reality Deck, which is the world's first 1.5 gigapixel immersive display. The Reality Deck contains 416 high resolution LCD monitors in a 4-wall surround layout, and similarly to the Immersive Cabin, uses a unique automatic door that is also a display surface. The graphics are generated on an 18-node cluster with 24 displays connected to each node using the AMD Eyefinity technology. We have extended the Immersive Cabin visualization framework to support the Reality Deck and developed a new gigapixel image renderer targeted at scientific and immersive visualization.

We have developed a set of visualization techniques for the exploration of large and complex data in both of our facilities. Conformal Visualization is a novel retargeting approach for partially-enclosed VR environments, such as the Immersive Cabin and the Reality Deck, to allow for the complete visualization of the data even when display surfaces are missing. Our technique uses conformal mapping to ensure that shape is preserved locally under the transformation and we demonstrate its use for the visualization of both mesh and lattice data. In our Frameless Visualization technique, the traditional framebuffer is replaced with reconstruction from a stream of rendered samples. This approach allows smooth user interaction even when rendering on a gigapixel display, such as the Reality Deck, or when using computationally-expensive visualization algorithms. Our system generates low-latency image samples using an optimal sampling lattice in the 2D+time space, as well as importance-driven higher quality samples. Finally, we have developed the Infinite Canvas visualization technique for horizontally-enclosed visual environments. As the user moves through the physical space of the facility, the graphics outside of the field of view are updated to create the illusion of an infinite continuous canvas. The Infinite Canvas has been used for the visual exploration of gigapixel datasets that are an order of magnitude larger than the surface area of the Reality Deck, including very large image collections.
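The abstract mentions a stable LBM with only 13 velocities on the FCC lattice. A minimal illustration of that stencil, assuming the standard FCC nearest-neighbour directions (all permutations of (±1, ±1, 0)); the weights and collision operator of the thesis's fD3Q13 model are not reproduced here:

```python
import numpy as np

# 13 lattice velocities of an FCC-based D3Q13 stencil: a rest vector
# plus the 12 nearest neighbours of the face-centered cubic lattice.
C = np.array(
    [(0, 0, 0)]
    + [(sx, sy, 0) for sx in (-1, 1) for sy in (-1, 1)]
    + [(sx, 0, sz) for sx in (-1, 1) for sz in (-1, 1)]
    + [(0, sy, sz) for sy in (-1, 1) for sz in (-1, 1)]
)

def stream(f):
    """Streaming step: shift each distribution f[i] along its lattice
    velocity on a periodic grid. f has shape (13, nx, ny, nz)."""
    return np.stack([
        np.roll(f[i], shift=tuple(int(s) for s in C[i]), axis=(0, 1, 2))
        for i in range(13)
    ])
```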
... In Fierz [11]). Independent of the solid-body physics side of a coupled overall scheme, a lattice size of roughly 1.4 to 1 ... therefore remains for the fluid side of the simulation ...