Figure (Optics Express):
(a) Grid of the array of microlenses in the case of single snapshot FInI; (b) Grid of the synthetic array in the case of the double snapshot.


Source publication
Article
In multi-view three-dimensional imaging, capturing the elemental images of distant objects requires a field-like lens that projects the reference plane onto the microlens array. In this case, the spatial resolution of the reconstructed images equals the spatial density of microlenses in the array. In this paper we report a simple...
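As a back-of-the-envelope illustration of the abstract's statement that the spatial resolution of the reconstructed images equals the spatial density of microlenses, the sketch below computes the Nyquist-limited resolvable spatial frequency set by the microlens pitch; the pitch value is an assumption, not taken from the paper.

```python
# Illustrative sketch: in FInI with a field lens, the reference plane is
# sampled at the microlens pitch p, so the finest recoverable spatial
# frequency is the Nyquist limit 1/(2p). The pitch below is an assumption.
def resolvable_frequency(pitch_mm: float) -> float:
    """Nyquist-limited spatial frequency (cycles/mm) for a microlens
    array of the given pitch."""
    return 1.0 / (2.0 * pitch_mm)

print(resolvable_frequency(0.2))  # 0.2 mm pitch -> 2.5 cycles/mm
```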

Similar publications

Article
The ZaP Flow Z-Pinch experiment investigates how flow shear stabilizes MHD modes. An upgrade to a high energy-density plasma experiment would allow exploration of flow shear's effectiveness in this operating regime. The experiment's upgrade would include the addition of a digital holographic interferometer to measure electron density with fine spat...
Article
We developed a human-scale single-ring OpenPET (SROP) system, which had an open space allowing us access to the subject during measurement. The SROP system consisted of 160 4-layer depth-of-interaction detectors. The open space with the axial width of 430 mm was achieved with the ring axial width of 214 mm and the ring inner diameter of 660 mm. The...
Conference Paper
This paper presents a method for generating a refocused image with restored spatial resolution on a plenoptic camera, which, unlike a traditional camera, can control the depth of field after capturing a single image. It is generally known that such a camera captures the 4D light field (angular and spatial information of light) within a limited 2D senso...
Article
Angle sensors are widely used for wavefront measurements, which is attributed to their integration and robustness. Currently, commercial sensors are available with pixel sizes in the order of wavelengths. However, the spatial resolution of angle sensors still lags far behind. Here, we report a one-dimensional, high-resolution wavefront sensor. It w...

Citations

... Viewing angular range, depth of field (DOF), clarity, continuity, and resolution are crucial metrics for evaluating the performance of 3D LFDs [14][15][16][17]. A high-quality 3D LFD system must simultaneously provide a wide viewing angular range, a large display depth, high clarity, smooth continuity, and high resolution. ...
... These indicators quantify the disparity between the processed image and the ideal image, with smaller disparities indicating greater restoration achieved by the method employed. The numerical values of the evaluation indicators demonstrate the improvement in image quality achieved by the proposed method [32][33][34][35][36][37][38]. Table 1 provides a comprehensive summary of the performance of three image quality evaluation metrics, namely, SSIM, PSNR, and VIF [31], with a focus on the central perspective of each method. ...
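As an illustration of the full-reference metrics named above, the following sketch computes PSNR and SSIM with scikit-image on synthetic stand-in images; VIF is not part of scikit-image and is omitted here, and the noise level is an arbitrary assumption.

```python
# Minimal sketch of two of the metrics named above (PSNR, SSIM) using
# scikit-image; the images are synthetic stand-ins, not data from the paper.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
ref = rng.random((128, 128))                                        # "ideal" view
test = np.clip(ref + 0.05 * rng.standard_normal(ref.shape), 0, 1)   # degraded view

psnr = peak_signal_noise_ratio(ref, test, data_range=1.0)
ssim = structural_similarity(ref, test, data_range=1.0)
print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.4f}")  # closer to ideal -> higher values
```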
Article
This paper proposes a method that utilizes a dual neural network model to address the challenges posed by aberration in the integral imaging microlens array (MLA) and the degradation of 3D image quality. The approach involves a cascaded dual convolutional neural network (CNN) model designed to handle aberration pre-correction and image quality restoration tasks. By training these models end to end, the method corrects the MLA aberration effectively and enhances the image quality of integral imaging. The feasibility of the proposed method is validated through simulations and optical experiments, using an optimized, high-quality pre-corrected elemental image array (EIA) as the image source for 3D display. The proposed method achieves a high-quality integral imaging 3D display by alleviating the conflict between MLA aberration and the reduction in 3D image resolution caused by system noise, without introducing additional complexity to the display system.
... Emerging technologies such as autonomous vehicles demand imaging technologies that can capture not only a 2D image but also the 3D spatial position and orientation of objects. Multiple solutions have been proposed, including LiDAR systems [1-3] and light-field cameras [4][5][6][7], though existing approaches suffer from significant limitations. For example, LiDAR is constrained by size and cost, and most importantly requires active illumination of the scene using a laser, which poses challenges of its own, including safety. ...
... Light-field cameras of various configurations have also been proposed and tested. A common approach uses a microlens array in front of the sensor array of a camera [4,5]; light emitted from the same point at different angles is then mapped to different pixels to create angular information. However, the mapping to a lower dimension carries a tradeoff between spatial and angular resolution. ...
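The tradeoff described here can be made concrete with a small sketch: rearranging the raw sensor image behind a microlens array into a 4D light field shows directly how the fixed pixel budget is split between spatial and angular coordinates. All dimensions below are illustrative assumptions.

```python
# Sketch of the spatial/angular resolution tradeoff: an (n_lens*n_ang)^2
# sensor behind an n_lens x n_lens microlens array yields only n_ang x n_ang
# angular samples and n_lens x n_lens spatial samples per view.
import numpy as np

n_lens, n_ang = 100, 8                                  # assumed array geometry
raw = np.random.rand(n_lens * n_ang, n_lens * n_ang)    # stand-in sensor image

# Each lenslet's n_ang x n_ang pixel patch becomes the angular axes (u, v).
lf = raw.reshape(n_lens, n_ang, n_lens, n_ang).transpose(1, 3, 0, 2)
print(lf.shape)  # (8, 8, 100, 100): 64 views, each only 100 x 100 pixels
```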
Article
Recent years have seen the rapid growth of new approaches to optical imaging, with an emphasis on extracting three-dimensional (3D) information from what is normally a two-dimensional (2D) image capture. Perhaps most importantly, the rise of computational imaging enables both new physical layouts of optical components and new algorithms to be implemented. This paper concerns the convergence of two advances: the development of a transparent focal stack imaging system using graphene photodetector arrays, and the rapid expansion of the capabilities of machine learning including the development of powerful neural networks. This paper demonstrates 3D tracking of point-like objects with multilayer feedforward neural networks and the extension to tracking positions of multi-point objects. Computer simulations further demonstrate how this optical system can track extended objects in 3D, highlighting the promise of combining nanophotonic devices, new optical system designs, and machine learning for new frontiers in 3D imaging.
... Emerging technologies such as autonomous vehicles demand imaging technologies that can capture not only a 2D image but also the 3D spatial position and orientation of objects. Multiple solutions have been proposed, including LiDAR systems [1,2,3] and light-field cameras [4][5][6][7], though existing approaches suffer from significant limitations. For example, LiDAR is constrained by size and cost, and most importantly requires active illumination of the scene using a laser, which poses challenges of its own, including safety. ...
Preprint
Recent years have seen the rapid growth of new approaches to optical imaging, with an emphasis on extracting three-dimensional (3D) information from what is normally a two-dimensional (2D) image capture. Perhaps most importantly, the rise of computational imaging, defined as the synergistic design of optical systems in conjunction with image reconstruction algorithms, enables both new physical layouts of optical components and new algorithms to be implemented. This paper concerns the convergence of two advances: the development of transparent photodetectors with high responsivity, and the rapid expansion of the capabilities of machine learning, including the development of powerful neural networks. In particular, we demonstrate that the use of transparent photodetector arrays stacked vertically along the optical axis of an imaging system, called a focal stack, together with a feedforward neural network, provides a powerful new approach to real-time 3D optical imaging, including object tracking. The focal stack imaging system is realized through the development of graphene transparent photodetector arrays. As a proof-of-concept, 3D tracking of point-like objects was successfully demonstrated with multilayer feedforward neural networks, which were then extended to track the positions of multi-point objects. Our computer model further demonstrates how this optical system can track extended objects in 3D, highlighting the promise of combining nanophotonic devices, new optical system designs, and machine learning for new frontiers in 3D imaging.
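A minimal sketch of the kind of feedforward network this abstract describes is given below: an MLP that maps a focal stack of detector-plane images to a 3D point position. The stack depth, plane resolution, and layer widths are assumptions for illustration, not the paper's actual architecture.

```python
# Hedged sketch: a multilayer feedforward network that regresses the (x, y, z)
# position of a point-like object from a stack of transparent-detector images.
import torch
import torch.nn as nn

n_planes, h, w = 4, 16, 16              # assumed focal-stack geometry

model = nn.Sequential(
    nn.Flatten(),                       # (batch, 4, 16, 16) -> (batch, 1024)
    nn.Linear(n_planes * h * w, 128),
    nn.ReLU(),
    nn.Linear(128, 64),
    nn.ReLU(),
    nn.Linear(64, 3),                   # predicted (x, y, z)
)

stack = torch.rand(1, n_planes, h, w)   # stand-in for measured intensities
print(model(stack))                     # one 3D position estimate
```

In practice such a model would be trained on simulated or measured focal stacks with known object positions; widening the output layer allows the positions of several points to be regressed jointly, as the abstract's multi-point extension suggests.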
... In the RPII using a lens array, the situation is the opposite. The spatial resolution mainly depends on the EI resolution, while the angular resolution depends on the number of lenses [22]. Thus, the common method to increase spatial resolution without decreasing angular resolution is to increase the effective pixel number of the display device by time-multiplexing or spatial-multiplexing methods [14]. ...
Article
In this paper, two different display modes of the pinhole-type integral imaging (PII) based hologram, the “pinhole mode” and the “lens mode”, are demonstrated by proper use of a random phase. The performance of the two display modes in terms of resolution, fill factor, and image depth is analyzed. Two different methods, the moving array lenslet technique (MALT) and high-resolution elemental image array (EIA) encoding, are introduced for the spatial resolution enhancement of the two display modes, respectively. Both methods enhance the spatial resolution without increasing the total pixel number or the space-bandwidth product (SBP) of the hologram. Both simulations and optical experiments verify that the proposed methods enhance the spatial resolution of the PII-based hologram at very low cost.
... In the implemented LF imaging system, virtual synthetic image planes are achievable for depth-refocusing over depths ranging from 25 cm to 350 cm, as shown in Fig. 5(d) and the supplementary video, Video S2. Beyond the unique capability of switching image acquisition between the high-resolution 2D and 3D LF modes, owing to the excellent optical properties of the elemental image acquisition and the nearly ideal switching capability of the demonstrated PMLA, the experimental reconstruction results of the presented LF camera, as well as the characteristics listed in Table II, compare well with those of previously reported LF cameras constructed with a passive-type MLA that enables 3D LF imaging only [42], [43]. ...
Article
We propose a time-sequential switching light-field (LF) camera for the alternating capture of high-resolution two-dimensional (2D) images and three-dimensional (3D) LF elemental images as additional functionalities. For image data acquisition in both the 2D and 3D LF imaging of moving objects at a video frame rate (or even higher frame rates, up to approximately 1000 fps), a polarization-dependent-switching micro-lens array (PMLA) is implemented in the LF camera system instead of a conventional passive-type MLA. By controlling the incident polarization conditions using an electrically fast-switching liquid crystal layer, the imaging mode can be time-sequentially switched quite rapidly (switching times of approximately 220 μs and 290 μs for the mode conversions from 3D LF to 2D mode and the reverse, respectively). Using the elemental image sets sampled from the alternating time-sequential imaging results, either directional-view images or depth-refocused images can be reconstructed and provided at a moving-picture frame rate. Depth-refocused images are possible over a wide depth range from 25 cm to 350 cm. Directional views in 22 × 22 and 9 × 9 portions can be reconstructed for single-shot image capture and time-sequential video-rate image capture, respectively.
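The depth-refocused reconstruction mentioned here is conventionally done by shift-and-sum over the directional views; the sketch below illustrates the idea. The shift-to-depth calibration and view count are placeholders, not the paper's parameters.

```python
# Shift-and-sum refocusing sketch: translate each directional view in
# proportion to its index and average; the chosen shift selects the depth
# plane that comes into focus.
import numpy as np

def refocus(views: np.ndarray, shift_px: float) -> np.ndarray:
    """views: (U, V, H, W) directional views; shift_px: per-view shift in
    pixels (larger magnitude -> refocusing on a nearer plane)."""
    U, V, H, W = views.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            dy = int(round((u - U // 2) * shift_px))
            dx = int(round((v - V // 2) * shift_px))
            out += np.roll(views[u, v], (dy, dx), axis=(0, 1))
    return out / (U * V)

views = np.random.rand(9, 9, 64, 64)  # e.g. the 9 x 9 directional views above
focused = refocus(views, shift_px=1.5)
```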
... Integral imaging 3D display is a promising 3D display technology, first proposed in 1908 [1][2][3]. It has the advantages of continuous viewpoints, full parallax, and no need for special viewing equipment [4][5][6]. However, several disadvantages need to be solved before its commercialization, such as the limited depth of the reconstructed 3D image, the narrow viewing angle, the pseudoscopic issue, the 2D/3D convertibility issue, and so on. ...
Article
An integral imaging-based 2D/3D convertible display system is proposed by using a lens-array holographic optical element (LAHOE), a polymer dispersed liquid crystal (PDLC) film, and a projector. The LAHOE is closely attached to the PDLC film to constitute a projection screen. The LAHOE is used to realize integral imaging 3D display. When the PDLC film with an applied voltage is in the transparent state, the projector projects a Bragg matched 3D image, and the display system works in 3D mode. When the PDLC film without an applied voltage is in the scattering state, the projector projects a 2D image, and the display system works in 2D mode. A prototype of the integral imaging-based 2D/3D convertible display is developed, and it provides 2D/3D convertible images properly.
... Most research on holoscopic 3D depth estimation techniques has been aimed at overcoming limitations related to H3DI reconstruction and determining viewing parameters such as depth of field (Kim et al. [12]) and the image quality of displayed images (Martínez-Cuenca et al. [13]). Knowledge of holoscopic image depth or spatial position is a key focus of contemporary digital imaging applications [3,14,15], and its accuracy is used to improve a wide range of technical issues, such as coding and transmission. The advantages of 3D image depth determination in holoscopic digital image processing are closely related to the application of H3DI in fields such as 3D cinema, robotic vision, medical imaging, detection and tracking of people, biometrics, and image generation in video games [16,34]. ...
Preprint
Holoscopic 3D imaging is a promising technique for capturing full-colour spatial 3D images using a single-aperture holoscopic 3D camera. It mimics the fly's-eye technique with a microlens array, in which each lens views the scene at a slightly different angle from its adjacent lens, recording three-dimensional information onto a two-dimensional surface. This paper proposes a method of depth map generation from a holoscopic 3D image based on the graph-cut technique. The principal objective of this study is to estimate the depth information present in a holoscopic 3D image with high precision. As such, the depth map is extracted from a single still holoscopic 3D image, which consists of multiple viewpoint images. The viewpoints are extracted and utilised for disparity calculation via the disparity-space-image technique, and pixel displacement is measured with sub-pixel accuracy to overcome the issue of the narrow baseline between the viewpoint images for stereo matching. In addition, cost aggregation is used to correlate the matching costs within a particular neighbouring region using the sum of absolute differences (SAD) combined with a gradient-based metric, and a winner-takes-all algorithm is employed to select the minimum element in the cost array as the optimal disparity value. Finally, the optimal depth map is obtained using the graph-cut technique. The proposed method extends the utilisation of the holoscopic 3D imaging system and enables the expansion of the technology to various applications in autonomous robotics, medicine, inspection, AR/VR, security, and entertainment, where 3D depth sensing and measurement are a concern.
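A minimal sketch of the SAD-based, winner-takes-all matching step that this abstract builds on is shown below, without the gradient-based metric, cost aggregation, or graph-cut refinement; the window size and disparity range are assumptions.

```python
# Block-matching disparity sketch: for each pixel, pick the disparity that
# minimizes the sum of absolute differences (SAD) over a small window,
# i.e. winner-takes-all over the cost array.
import numpy as np

def sad_disparity(left, right, max_disp=8, win=3):
    H, W = left.shape
    disp = np.zeros((H, W), dtype=int)
    for y in range(win, H - win):
        for x in range(win + max_disp, W - win):
            patch = left[y - win:y + win + 1, x - win:x + win + 1]
            costs = [np.abs(patch - right[y - win:y + win + 1,
                                          x - d - win:x - d + win + 1]).sum()
                     for d in range(max_disp + 1)]
            disp[y, x] = int(np.argmin(costs))  # winner-takes-all
    return disp

left = np.random.rand(32, 32)
right = np.roll(left, -2, axis=1)          # synthetic 2-pixel baseline shift
print(sad_disparity(left, right)[16, 16])  # expect 2
```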
... To optimize these system parameters, integral imaging analysis has been reported in Refs. [21][22][23][24][25][26][27][28][29][30][31][32]. However, in previous research, there have still been trade-offs between lateral resolution and the DoF. ...
... In this paper, we propose a new 3D integral imaging technique that can alleviate these resolution constraints in integral imaging. In our proposed method, we try to overcome the resolution trade-off by using non-uniform system parameters [30][31][32], such as the f-number of the lens and the sensor size. In addition, we optimize the location of the reference plane by using the hyperfocal distance as the image distance for the lenslets. ...
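The hyperfocal-distance placement mentioned here can be illustrated with the standard thin-lens formula H = f²/(Nc) + f, where f is the focal length, N the f-number, and c the acceptable circle of confusion; focusing at H renders everything from H/2 to infinity acceptably sharp. The parameter values below are illustrative assumptions, not the paper's.

```python
# Worked hyperfocal-distance example (standard thin-lens DoF formula);
# all numbers are assumptions for illustration.
def hyperfocal_mm(f_mm: float, f_number: float, coc_mm: float) -> float:
    return f_mm ** 2 / (f_number * coc_mm) + f_mm

H = hyperfocal_mm(f_mm=3.0, f_number=4.0, coc_mm=0.01)
print(H, H / 2)  # 228.0 mm hyperfocal distance, 114.0 mm near DoF limit
```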
... The two different schemes used in integral imaging (RPII and DPII) can only improve either the lateral resolution or the DoF. Although some optimization of 3D resolution has been addressed by several studies [8][9][10][30][31][32], this resolution trade-off has not yet been overcome. ...
Article
In this paper, we propose a new 3D passive image sensing and visualization technique that simultaneously improves the lateral resolution and depth of field (DoF) of integral imaging. There is a trade-off between lateral resolution and DoF in integral imaging. To overcome this issue, a large aperture and a small aperture can be used to record the elemental images, to reduce the diffraction effect and to extend the DoF, respectively. Therefore, in this paper, we utilize these two pickup concepts with a non-uniform camera array. To show the feasibility of our proposed method, we implement an optical experiment. For a detailed comparison, we calculate the peak signal-to-noise ratio (PSNR) as the performance metric.
... In the conventional realization of an integral monitor, a microimage is displayed just behind each microlens, so that the microlens and the microimage have the same size. The microimages can be obtained directly with a plenoptic camera, or transformed from the elemental images captured with an array of digital cameras [13,16-19]. ...
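The transformation from elemental images to microimages referred to here is, in its simplest form, a transposition of the camera-index and pixel-index axes, as the sketch below illustrates; the array sizes are assumptions.

```python
# Elemental-image-to-microimage transposition sketch: pixel (i, j) of
# elemental image (k, l) becomes pixel (k, l) of microimage (i, j).
import numpy as np

eis = np.random.rand(11, 11, 30, 30)  # 11 x 11 camera grid, each EI 30 x 30 px

micro = eis.transpose(2, 3, 0, 1)     # swap camera-index and pixel axes
print(micro.shape)                    # (30, 30, 11, 11): a 30 x 30 grid of
                                      # 11 x 11-pixel microimages
```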
Article
We propose a new method for improving the observer experience when using an integral monitor. Our method makes it possible to increase the viewing angle of the integral monitor, and also the maximum parallax that can be displayed. Additionally, it is possible to decide which parts of the 3D scene are displayed in front of or behind the monitor. Our method is based, first, on the direct capture, with a significant excess of parallax, of elemental images of real 3D scenes. From these, a collection of microimages adapted to the observer's lateral and depth position is calculated. Finally, an eye-tracking system determines the observer's 3D position, and therefore which set of microimages to display. In summary, we report here, for the first time to our knowledge, the application of eye-tracking technology to the display of integral images of real 3D scenes with a bright background. Although we report only a proof-of-concept experiment, this result could have direct application in the near future to the broadcasting of 3D videos recorded in a professional studio, to videoconferences, or to online professional meetings.