Figure 2. Examples of panoramic image display environments

Source publication
Article
Full-text available
Conventional projector-based display systems are typically designed around precise and regular configurations of projectors and display surfaces. While this results in rendering simplicity and speed, it also means painstaking construction and ongoing maintenance. In previously published work, we introduced a vision of projector-based displays const...

Contexts in source publication

Context 1
... Trimension [4] uses three overlapping projectors to project images on a rigid cylindrical screen. The light projectors are aligned symmetrically so that each overlap region is a well-defined rectangle (Figure 2a). Flight simulators have been using a similar technique for a long time. ...
Context 2
... rear-projection and head-tracking, the CAVE [1, 9] enables interactive and rich panoramic visualizations. The setup is a precise, well-designed cube-like structure (Figure 2b). The CAVE assumes that the display surface and projector geometries are known and are fixed a priori in a specific configuration. ...

Similar publications

Article
Full-text available
In this article we evaluate the relevance of 3D visualisation as a research tool for the history of cinemagoing. How does the process of building a 3D model of cinema theatres relate to what we already know about this history? In which ways does the modelling process allow for the synthesis of different types of archived cinema heritage assets? To...

Citations

... Projection mapping deals with altering the appearance of 3D objects by projecting light onto them using projectors. A large body of literature exists on projection mapping onto static objects [10][11][12][13][14][15][16][17][18][19][20]. More recently, dynamic projection mapping (DPM), which allows projected light to be mapped onto moving objects, has received much attention [1][2][3][4][5][6][7][8][21][22][23][24][25][26][27][28][29][30][31][32][33][34]. ...
Article
We present the first method for dynamic projection mapping on rectangular, deformable and stretchable elastic materials using consumer-grade time-of-flight (ToF) depth cameras. We use a B-spline patch to model the projection surface without explicitly modeling the deformation. Leveraging the nature of deformable and stretchable materials, we propose an efficient tracking method that can track the boundary of the surface material. This achieves realistic mapping even in the interior of the surface, such that the projection appears to be printed on the material. The surface representation is updated in real time using GPU-based computations. Further, we show that the speed of these updates is limited by the speed of the camera, and the method can therefore be adapted to higher-speed cameras as well.
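For illustration only (not the authors' implementation), the following Python sketch fits a cubic B-spline patch to a hypothetical grid of depth samples such as a ToF camera might provide, and evaluates it densely each frame; the grid size, depth values and smoothing factor are invented, and SciPy's RectBivariateSpline stands in for whatever B-spline machinery the paper actually uses.
# Illustrative sketch only: fit a cubic B-spline patch to a regular grid of
# depth samples and evaluate it densely. All numeric values are invented.
import numpy as np
from scipy.interpolate import RectBivariateSpline

u = np.linspace(0.0, 1.0, 32)                        # surface parameter u
v = np.linspace(0.0, 1.0, 32)                        # surface parameter v
depth = 1.5 + 0.05 * np.sin(6.0 * u)[:, None] * np.cos(6.0 * v)[None, :]

patch = RectBivariateSpline(u, v, depth, kx=3, ky=3, s=1e-4)  # cubic patch, mild smoothing

# Dense, per-frame evaluation (e.g. at projector resolution).
uu = np.linspace(0.0, 1.0, 640)
vv = np.linspace(0.0, 1.0, 480)
depth_dense = patch(uu, vv)                          # array of shape (640, 480)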
Article
"Spatial augmented reality (SAR)" or "projection mapping" projects an image of a virtual object on the surface of a real object. A "view-dependent" display shows the virtual object with its correct appearance for an arbitrary viewer's position. If the virtual object and the real object have different shapes, the virtual object's image to project needs to be correctly distorted according to the viewer's position. Besides, the difference causes the real object's surface to have some empty areas onto which the virtual object is not projected. Such empty areas seriously degrade the viewer's feeling that the virtual object merges into the real world. We propose a method to eliminate undesired empty areas by projecting the real background behind the real object in a view-dependent way. Our method converts a real background's image captured by a fixed camera to an appropriate image for a viewer's position based on homography. This image conversion approximates the background's shape by a plane adjusted for an arbitrary viewer's position. The plane is determined by practical parameters interpolated on a 3D grid space. Consequently, the viewer does not feel the presence of the real object and feels the virtual object merging into the real world.
... A large body of literature exists on projection mapping on static objects [22,[27][28][29][30][31][32][33][35][36][37]. When considering dynamic objects, [2,3,5,7,[10][11][12][13][14][15][16][17][18] handle rigid objects with known geometry in accurately calibrated non-real-time systems. ...
... Park and Park [14] presented an active calibration method in which projected and printed pattern corners are employed. Raskar et al. [15] demonstrated a multi-projector display by means of two calibrated cameras. Fiala [16] used self-identifying patterns for projector calibration to obtain a rectified image composed from multiple projectors. ...
... Apart from this, projections onto segmented non-planar surfaces (edges, curvatures κ1, κ2 ≠ 0) or segmented planes with orientations other than perpendicular to the z-axis of the global CS exhibit distortions. The rectification of geometrically complex surfaces has been reported by several researchers, e.g., [2,15,34]. ...
Article
Full-text available
Due to major technological improvements over the last decades and a significant decrease in costs, electronic projection systems show great potential for providing innovative applications across industries. In most cases, projectors are used to display entire images onto contiguous planar surfaces, but with no active consideration of their three-dimensional environment. Since spatial interactive projections promise new possibilities to display information in the manufacturing industry, we developed a practical approach to how common projection systems can be integrated into a working space and interact with their environment. In order to display information in a spatially dependent manner, a projection model was introduced along with a calibration method for mapping. Subsequently, the approach was validated in the context of robot-based optical inspection systems where texture projections are applied onto sheet metal parts as reference features, exclusively to designated regions. The results show that accurate region-specific projections were possible within the calibrated projection volume. In addition, the accuracy and computing speed were investigated to identify limitations. Our approach for interactive projections supports the transfer to other application areas, enables us to rethink current manual and automated procedures and processes in which visualized information benefits the task of interest, and provides new functionalities for manufacturing industries.
... Few works can be found in which non-planar surfaces are used to calibrate projectors. One such work was presented by Raskar et al. (1999), in which, as a first step, a pair of cameras was calibrated. Afterward, a grid was projected point by point so that stereo-pair triangulation could be performed from the images captured by the stereoscopic system; the spatial coordinates of those points were then computed to geometrically define the (non-planar) projection screen. ...
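For illustration only, the snippet below shows the core triangulation step of such a procedure in simplified form; the intrinsics, baseline and pixel positions are invented and do not reproduce the cited implementation.
# Illustrative sketch only: triangulate one projected grid point from a
# calibrated stereo pair. All numeric values below are invented.
import cv2
import numpy as np

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])                       # hypothetical shared intrinsics
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])     # camera 1 at the origin
P2 = K @ np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.0]])])  # 0.2 m baseline

pt1 = np.array([[320.0], [240.0]])                    # projected point in camera-1 image
pt2 = np.array([[300.0], [240.0]])                    # same point in camera-2 image

X_h = cv2.triangulatePoints(P1, P2, pt1, pt2)         # 4x1 homogeneous point
X = (X_h[:3] / X_h[3]).ravel()                        # 3D point on the projection screen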
Article
Full-text available
The geometric calibration of projectors is a demanding task in many areas related to computer vision, virtual reality or augmented reality, to name a few. To date, different methods have been proposed to retrieve the intrinsic and extrinsic parameters of projectors. During the last 20 years, researchers have used cameras as a means to calibrate projectors in order to automate the process. However, this can add: (1) complexity in terms of mathematical formulation; (2) restrictions in terms of camera locations relative to projectors; and (3) additional errors (those due to the camera calibration itself). Most of these camera-based methods make use of planar homographies, and others require an extended calibration process (for both the camera and the projector). In this paper, we present an approach that combines the direct linear transformation (DLT) with projected augmented reality to perform an interactive calibration of projectors without the need for cameras. This method is based on non-coplanar points and 2D/3D correspondences, which are established interactively. Intrinsic and extrinsic calibration is achieved in a single step using the DLT. The method revives older approaches to projector calibration while adding the capabilities of interactive systems, all integrated in a single software solution. We conduct different experiments considering two projector setups and two different sets of control points, showing that the accuracy of the method in real space is between one and two pixels.
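A minimal sketch of the DLT estimation step named in this abstract is given below (illustrative only; the control points and projector pixels are invented placeholders, and the authors' interactive software is not reproduced). Each 2D/3D correspondence contributes two rows to a linear system whose null vector gives the 3x4 projection matrix; intrinsics and extrinsics can then be separated, e.g. by an RQ decomposition of its left 3x3 block.
# Illustrative DLT sketch: estimate a projector's 3x4 projection matrix from
# N >= 6 non-coplanar 3D control points and the projector pixels driven onto them.
import numpy as np

def dlt(points_3d, points_2d):
    rows = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return Vt[-1].reshape(3, 4)          # least-squares solution: last right singular vector

# Invented control points (metres) and the projector pixels illuminating them.
pts_3d = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 0.5), (0.5, 1, 1)]
pts_2d = [(100, 120), (820, 130), (110, 600), (300, 200), (700, 650), (420, 580)]
P = dlt(pts_3d, pts_2d)                  # 3x4 projector projection matrix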
... The paper [2] used a structured-light pattern to solve the correspondence problem. [3,4] first solved the problem for a number of points using the projection of Gaussian points, and then used interpolation methods to establish the correspondence for the remaining points. ...
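As one concrete (but assumed) instance of a structured-light correspondence scheme like the one mentioned for [2], the sketch below generates and decodes binary-reflected Gray-code stripe patterns; the cited work does not necessarily use this exact coding.
# Illustrative sketch only: Gray-code stripes are one common structured-light
# scheme for the projector-camera correspondence problem.
import numpy as np

def gray_code_patterns(width, bits):
    # One vertical stripe pattern per bit, each of shape (1, width), values 0/255.
    cols = np.arange(width)
    gray = cols ^ (cols >> 1)                         # binary-reflected Gray code
    return [(((gray >> b) & 1) * 255).astype(np.uint8)[None, :] for b in range(bits)]

def decode_columns(bit_images):
    # Recover the projector column index from thresholded captured bit images.
    gray = np.zeros(bit_images[0].shape, dtype=int)
    for b, img in enumerate(bit_images):
        gray |= (img > 127).astype(int) << b
    binary = gray.copy()
    mask = gray >> 1
    while mask.any():                                  # Gray-to-binary conversion
        binary ^= mask
        mask >>= 1
    return binary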
... Some of the above articles are automated [2][3][4], [6,7] and [8], while others [5], [9][10][11][12][13] and [14] require initiation by a human operator. A system is called automatic if an ordinary user can set it up by placing projectors beside each other and running the software. ...
Article
Full-text available
Most existing systems for calibrating multi-projector displays suffer from several important limitations, such as dependence on the point of view, restrictions on the display surface and on the number of projectors, and the use of obtrusive markers. In this paper, a new method for view-independent calibration of multi-projector displays is presented. Given that the calibration of a multi-projector display is an optimization problem, we compute the calibration parameters by writing appropriate energy functions for the geometric calibration phase. In this method, the camera and projector are treated as a pair. In the first step, the camera-projector pair is calibrated. After that, the calibration problem of the system reduces to estimating a number of camera positions relative to each other. The method imposes no particular shape on the screen. In addition, because the 3D shape of the screen is recovered, the method is view-independent and the image can ultimately be wallpapered onto the screen. According to the tests carried out to evaluate the system, the accuracy of the proposed system is sub-pixel, and as a result, no misalignment is visible to the human eye in the overlap regions of the projectors.
... Some other works can be found where non-planar surfaces are used to achieve calibration. One of the first works found in the literature made use of a 3D calibration pattern with spatially-known control points (CPs) and two cameras, which were fully calibrated by means of projective geometry [23]. Other authors incorporate structured light techniques in the process of calibration. ...
... The proposed method is based on projective geometry, similar to the one presented in [23], but uses one camera instead of two and assumes that the geometry of the projection surface is a vertical cylinder, a common case in many virtual reality systems. The mathematical background makes use of the Direct Linear Transformation (DLT) model [4,9], which gives our implementation both simplicity and robustness, because the model is easy to implement and the results can be refined through a least-squares procedure if redundancies (more CPs than the minimum required) are available. ...
Article
Full-text available
In this paper, we propose a fast and easy-to-use projector calibration method that requires only a minimal set of input data, thus reducing the calibration time. The method is based on the Direct Linear Transformation (DLT) mathematical model, which makes it simple and fully automatic. We show the application of this method on cylindrical surfaces, as well as some real application examples. The results show that with the minimum configuration of 6 control points (CPs), the standard deviation in the projector positioning yielded by the calibration process is less than one percent of the position values.
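The six-control-point minimum quoted above follows from a simple count of unknowns in the DLT model; this is the textbook argument (under the usual normalization $L_{12}=1$), not material taken from the paper itself:
\[
u_i = \frac{L_1 X_i + L_2 Y_i + L_3 Z_i + L_4}{L_9 X_i + L_{10} Y_i + L_{11} Z_i + 1},
\qquad
v_i = \frac{L_5 X_i + L_6 Y_i + L_7 Z_i + L_8}{L_9 X_i + L_{10} Y_i + L_{11} Z_i + 1}.
\]
Each control point $(X_i, Y_i, Z_i) \mapsto (u_i, v_i)$ yields two linear equations in the eleven parameters $L_1, \dots, L_{11}$, so $n$ points give $2n$ equations and $2n \ge 11$ requires $n \ge 6$; any additional points provide the redundancy used for least-squares refinement.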
... Since a projector is decoupled from a screen, it can project an image not only onto a planar surface but also onto many types of non-planar surfaces [1,2]. Moreover, by using multiple projectors, one large-scale seamless display can be created and additional information can be effectively superimposed on the display surface [2,3]. Over the last few years, projector technology has advanced at a dramatic rate: tiny mobile projectors are now commercially available, and a projector has even been embedded in a mobile phone, making it applicable to the field of wearable computing [4]. ...
Article
Full-text available
In this paper, we propose a novel interactive multi-resolution display system. The proposed system is based on a projector-mounted mobile robot in an Intelligent Space. The Intelligent Space can calculate the location of the region of interest (ROI) by recognizing the user's pointing gesture. The steerable projector mounted on the mobile robot can improve the brightness and resolution of the ROI within a large image projected by a stationary projector installed in the Intelligent Space. In the proposed system, the user is not required to hold any apparatus to interact with the display. Additionally, the proposed system is easy to use because it is designed with the natural and intuitive hand movement of the user in mind. In the experiments, we demonstrate the feasibility of the proposed system.
... Many existing methods have attempted to recover the 3D geometry of the screens to approximate the transformations [1], but doing so usually requires a substantial amount of manual interaction. There are techniques for camera-based reconstruction of non-planar multi-projector displays [5,10]; however, they also require many manual steps to achieve an accurate calibration, and the calibration must be repeated from the start if the display is physically disturbed. The Automatic Screen Calibration method utilized in our demonstration can automatically recover the display-to-screen transformations on a spherical FTVR display, requiring only a calibrated camera. ...
Conference Paper
Full-text available
We present cubic and spherical multi-screen fish tank virtual reality displays that use novel interactive and automatic calibration techniques to achieve convincing 3D effects. The two displays contrast the challenges and benefits of multiple projectors versus flat-panel screens, bordered versus borderless designs, and the performance of head-trackers. Individuals will be able to subjectively evaluate the visual fidelity of the displays by comparing physical objects to their virtual counterparts, by comparing the two displays, and by changing the level of calibration accuracy. They will also be able to test the first markerless, interactive, and user-dependent head-tracker calibration, which promises accurate viewpoint registration without the need for manual measurements. In conjunction with an automatic screen calibration technique, the displays offer a unique and convincing 3D experience.
... An early work on non-planar calibration was published by Raskar et al. [13], in which a stereo camera pair is used to recover projector-camera parameters and reconstruct the non-planar surface. They achieved registration using both structured light and additional surfaces. ...
... The rear-projection through a small projection hole makes calibration difficult, since the camera's view is mostly blocked by the edge of the small hole. To overcome these problems, we propose a calibration pipeline similar to Raskar's work [13,14], but with modifications to adapt it to our system. This will be discussed in detail in the display calibration section. ...
... We calibrate camera C and projectors P using a plane-based calibration approach [22], which is essentially an extension of Zhang's calibration technique [23] to camera-projector systems. With the spherical screen removed, a planar pattern placed at different positions and orientations is used to avoid the degenerate case in projector calibration [13]. After this step, the intrinsic and extrinsic parameters of the camera and projectors are recovered. ...
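A compact, hedged sketch of this kind of plane-based camera-projector calibration is given below (OpenCV-based; all correspondences are assumed to be supplied by the caller, and it does not reproduce the cited system's code).
# Illustrative sketch only: plane-based calibration of a camera-projector pair
# in the spirit of Zhang's method. The camera is calibrated from checkerboard
# views; the projector is calibrated as an "inverse camera" from projector
# pixels whose board-plane 3D positions were recovered per view (e.g. via
# structured light); the relative pose is then estimated with intrinsics fixed.
import cv2

def calibrate_camera_projector(board_3d, board_cam,      # per-view board corners: 3D (board frame), camera pixels
                               proj_3d, proj_cam, proj_pix,  # per-view projected features: 3D, camera pixels, projector pixels
                               cam_size, proj_size):         # (width, height) of camera image and projector image
    # 1. Camera intrinsics from the checkerboard views.
    _, K_cam, dist_cam, _, _ = cv2.calibrateCamera(
        board_3d, board_cam, cam_size, None, None)
    # 2. Projector intrinsics, treating the projector as an inverse camera:
    #    its "image points" are projector pixels, its object points lie on the board.
    _, K_proj, dist_proj, _, _ = cv2.calibrateCamera(
        proj_3d, proj_pix, proj_size, None, None)
    # 3. Camera-to-projector rotation and translation with intrinsics fixed.
    _, _, _, _, _, R, T, _, _ = cv2.stereoCalibrate(
        proj_3d, proj_cam, proj_pix, K_cam, dist_cam, K_proj, dist_proj,
        cam_size, flags=cv2.CALIB_FIX_INTRINSIC)
    return K_cam, dist_cam, K_proj, dist_proj, R, T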
Conference Paper
Full-text available
We describe the design, implementation and detailed visual error analysis of a 3D perspective-corrected spherical display that uses multiple calibrated, rear-projected pico-projectors. The display system is calibrated via 3D reconstruction using a single inexpensive camera, which enables both view-independent and view-dependent applications, also known as Fish Tank Virtual Reality (FTVR). We perform an error analysis of the system in terms of display calibration error and head-tracking error using a mathematical model. We found that head-tracking error causes significantly more eye angular error than display calibration error; that angular error becomes more sensitive to tracking error as the viewer moves closer to the sphere; and that angular error is sensitive to the distance between the virtual object and its corresponding pixel on the surface. Taken together, these results provide practical guidelines for building a spherical FTVR display and can be applied to other configurations of geometric displays.
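The reported dependence on viewer distance can be seen from a simple small-angle model (an assumption for illustration, not the authors' full error analysis): with an on-surface displacement $d$ between the intended and the rendered pixel and a viewer at distance $r$ from the surface,
\[
\theta \approx \arctan\!\frac{d}{r},
\qquad
\frac{\partial \theta}{\partial d} = \frac{1}{r\left(1 + (d/r)^2\right)} \approx \frac{1}{r} \quad (d \ll r),
\]
so the angular error per unit of tracking- or calibration-induced displacement scales roughly as $1/r$ and grows as the viewer approaches the sphere.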