Figure - available from: Applied Optics
(a) Fiber bundle structure and defects lead to image artifacts. Fiber absorbers placed at every other intersection of the 2.5 μm fibers prevent crosstalk. (b) An OVT5653 sensor is used to sample the image exiting the fiber bundle. The two magnified inset micrographs share the same scale.


Source publication
Article
Full-text available
Some high-performance imaging systems generate a curved focal surface and so are incompatible with focal plane arrays fabricated by conventional silicon processing. One example is a monocentric lens, which forms a wide field-of-view high-resolution spherical image with a radius equal to the focal length. Optical fiber bundles have been used to coup...

Citations

... Because pCLE uses fiber bundles (FB), the spatial resolution of an FB imaging system is constrained by the core diameter and packing density of the optical fibers [6]. Additionally, variations in the light-transmission properties of individual fibers and their surrounding sheaths produce honeycomb patterns that are superimposed on the imaging results and hinder precise analysis of objects [7]. Over the past few years, various methods have been developed to eliminate honeycomb patterns, such as applying bandpass filters in the Fourier domain [8][9][10][11]. ...
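The Fourier-domain filtering idea cited above can be sketched with a Gaussian low-pass mask that suppresses the high spatial frequencies carrying the honeycomb pattern; the function name and the `cutoff_frac` parameter are illustrative choices, not the specific filters of [8]-[11]:

```python
import numpy as np

def remove_honeycomb(img, cutoff_frac=0.6):
    """Suppress the fiber-bundle honeycomb pattern by low-pass filtering
    in the Fourier domain.  cutoff_frac sets the Gaussian mask width as a
    fraction of the Nyquist radius (an illustrative tuning parameter)."""
    h, w = img.shape
    F = np.fft.fftshift(np.fft.fft2(img))               # spectrum, DC at center
    yy, xx = np.mgrid[-(h // 2):h - h // 2, -(w // 2):w - w // 2]
    r = np.sqrt((yy / (h / 2)) ** 2 + (xx / (w / 2)) ** 2)  # normalized radius
    mask = np.exp(-((r / cutoff_frac) ** 2))            # Gaussian low-pass mask
    return np.fft.ifft2(np.fft.ifftshift(F * mask)).real
```

In practice the honeycomb period is roughly the fiber pitch, so the cutoff would be chosen just below the corresponding spatial frequency.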
Article
Full-text available
Probe-based confocal laser endoscopy (pCLE) has emerged as a powerful tool for disease diagnosis, yet it faces challenges such as the formation of hexagonal patterns in images due to the inherent characteristics of fiber bundles. Recent advancements in deep learning offer promise in image denoising, but the acquisition of clean-noisy image pairs for training networks across all potential scenarios can be prohibitively costly. Few studies have explored training denoising networks on such pairs. Here, we propose an innovative self-supervised denoising method. Our approach integrates noise prediction networks, image quality assessment networks, and denoising networks in a collaborative, jointly trained manner. Compared to prior self-supervised denoising methods, our approach yields superior results on pCLE images and fluorescence microscopy images. In summary, our novel self-supervised denoising technique enhances image quality in pCLE diagnosis by leveraging the synergy of noise prediction, image quality assessment, and denoising networks, surpassing previous methods on both pCLE and fluorescence microscopy images.
... By fabricating the FOP with a concave spherical input face and a flat output face, and coupling it directly between the hemispherical focal plane of the monocentric objective lens and the surface of the image sensor as a relay image-transmission device, the mismatch between the curved focal plane of the monocentric objective lens and the flat sensitive surface of the sensor can be resolved. To produce a large-field image with hundreds of millions or even billions of pixels [5], several FOPs coupled to large-format space-grade image sensors are tiled in a particular spatial arrangement that provides both a large field of view and high-resolution imaging [4,[6][7][8][9]. ...
Article
Full-text available
The monocentric camera based on fiber relay imaging offers light weight, a compact size envelope, a vast field of view, and high resolution, which can fully satisfy the requirements of space-based surveillance systems. However, defects in the fiber optic plate (FOP) cause loss of imaging data, and the FOP's discrete structure exacerbates imaging non-uniformity. A global defect-detection approach based on manual threshold segmentation of saturated frames is proposed to detect FOP defect features; its efficacy and accuracy are confirmed by comparison with the classical Otsu algorithm. Additionally, the relative imaging response coefficient of each pixel is measured experimentally and the pixel response non-uniformity is corrected, reducing the whole-image non-uniformity from 10.01% to 0.78%. This study expedites the use of fiber relay imaging-based monocentric cameras in space-based surveillance, and the technique described here is also suitable for large-array fiber-coupled relay image-transmission systems.
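The per-pixel response correction described above is essentially a flat-field (gain) correction. A minimal sketch, assuming the relative response coefficients are estimated from a uniformly illuminated frame (the function names and the std/mean non-uniformity metric are illustrative assumptions, not necessarily the paper's exact definitions):

```python
import numpy as np

def nonuniformity(img):
    """Response non-uniformity as a percentage (std / mean) -- an
    illustrative metric."""
    return 100.0 * img.std() / img.mean()

def flat_field_correct(raw, flat):
    """Divide out per-pixel relative response coefficients estimated
    from a uniformly illuminated (flat) frame."""
    gain = flat / flat.mean()                 # relative response per pixel
    return raw / np.clip(gain, 1e-6, None)    # avoid division by ~zero
```

Dead-fiber defects found by the threshold-segmentation step would be excluded from (or interpolated across in) the gain map before this division.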
... With concave surfaces, optical systems can be simplified dramatically because there is no need to correct field curvature, one of the driving factors behind complex optical systems. [23][24][25][26] In this paper, we first introduce the concept of a compact HSHI with a 3D printed glass lightguide array, and then demonstrate the system performance with a mask pattern and biological slides. We also discuss future research and potential applications. ...
... Bio-inspired compact, high-resolution snapshot hyperspectral imaging (HSHI) system. a) Schematic configuration of HSHI with lightguide array, b) eye structure of funnel-web spider Agelena labyrinthica, [26] c) schematic diagram of an apposition eye, [27] d) lightguide array with lenses and dispersion prism in the output end for hyperspectral imaging, e) lightguide array with lens in the input end and lenses and prism in the output end for hyperspectral imaging, f) lightguide array with concave input surface and convex output surface, g) lightguide array with concave input and output surfaces, and h) lightguide array with flat input and output surfaces. ...
Article
Full-text available
To address the major challenge of obtaining high spatial resolution in snapshot hyperspectral imaging, a 3D printed glass lightguide array is developed to sample the intermediate image at high spatial resolution and redistribute the pixels at the output end to achieve high spectral resolution. A curved 3D printed lightguide array can significantly simplify the snapshot hyperspectral imaging system, achieve better imaging performance, and reduce system complexity and cost. A two-photon polymerization process for printing the glass lightguide array is developed, and the system performance with biological samples is demonstrated. This new snapshot technology will catalyze new hyperspectral imaging system development and open doors for new applications from the UV to the infrared.
... The monocentric system has a FoV of 140°, a focal length of 7.88 mm, a total length of 14.47 mm, and F/1.5. A monocentric lens with multi-aperture integration was proposed in 2015 (Fig. 7(c)) [179]. In this optical system, all optical surfaces are spherical and share a common center of curvature. ...
... The monocentric lens combines a compact imager volume with freedom from coma and astigmatism. An F/1.35, 30-megapixel, 126° fiber-coupled monocentric lens imager prototype greatly reduces the volume of the wide-FoV imaging system compared to a commercial F/4 camera (Fig. 7(d)) [179]. The image-processing methodology can significantly improve the quality of the fiber-relayed prototype image, as illustrated in Fig. 7(e) [179]. ...
... An F/1.35, 30-megapixel, 126° fiber-coupled monocentric lens imager prototype greatly reduces the volume of the wide-FoV imaging system compared to a commercial F/4 camera (Fig. 7(d)) [179]. The image-processing methodology can significantly improve the quality of the fiber-relayed prototype image, as illustrated in Fig. 7(e) [179]. A gigapixel monocentric multiscale imager has been shown to integrate a two-dimensional mosaic of subimages [59], [188], [189]. ...
Article
Full-text available
With the rapid development of high-speed communication and artificial intelligence technologies, human perception of real-world scenes is no longer limited to the use of small Field of View (FoV) and low-dimensional scene detection devices. Panoramic imaging emerges as the next generation of innovative intelligent instruments for environmental perception and measurement. However, while satisfying the need for large-FoV photographic imaging, panoramic imaging instruments are expected to have high resolution, no blind area, miniaturization, and multidimensional intelligent perception, and to be combined with artificial intelligence methods towards the next generation of intelligent instruments, enabling deeper understanding and more holistic perception of 360° real-world surrounding environments. Fortunately, recent advances in freeform surfaces, thin-plate optics, and metasurfaces provide innovative approaches to address human perception of the environment, offering promising ideas beyond conventional optical imaging. In this review, we begin by introducing the basic principles of panoramic imaging systems, and then describe the architectures, features, and functions of various panoramic imaging systems. Afterwards, we discuss in detail the broad application prospects and great design potential of freeform surfaces, thin-plate optics, and metasurfaces in panoramic imaging. We then provide a detailed analysis of how these techniques can help enhance the performance of panoramic imaging systems. We further offer a detailed analysis of applications of panoramic imaging in scene understanding for autonomous driving and robotics, spanning panoramic semantic image segmentation, panoramic depth estimation, panoramic visual localization, and so on. Finally, we cast a perspective on future potential and research directions for panoramic imaging instruments.
... I. Stamenov and team at the University of California, San Diego, designed an optical system with two-glass symmetric monocentric lenses coupled with fiber bundles [9] and developed an ultra-wide-angle camera using the structure of concentric lens-coupled fiber-optic panels, with a camera field of view of 126° × 16°, a transfer function larger than 0.4 over the full field of view at 200 lp/mm, and a billion pixels [10]. Stephen J. [11] studied methods to mitigate moiré artifacts and local obscuration of fiber bundles. Jianbo Shao [12] proposed a restoration method to remove honeycomb patterns and improve resolution for fiber bundle images. ...
Article
Full-text available
In this study, we developed a numerical model of the coupled modulation transfer function (coupled-MTF) of the discrete sampling system formed by hexagonally aligned fiber-optic imaging bundles coupled to CCD pixels, from the perspective of optical-system image-quality evaluation. The results show that when the spatial frequency of the input target signal deviates from the Nyquist frequency by 1%, increasing the number of fibers makes the oscillation of the coupled-MTF converge faster, and the coupled-MTF converges to a stable value when the number of fibers reaches 1000 × 1000. When the deviation of the input spatial frequency from the Nyquist frequency is within 1%, the oscillatory convergence of the coupled-MTF accelerates with increasing deviation. The coupled-MTF oscillates with the period of the deviation of the input signal's peak from the initial position of the fiber center, and the theoretical spatial period of this oscillation is twice the fiber diameter. This study provides important guidelines for selecting the number of fibers, the input spatial frequency, and the initial position deviation of hexagonally arranged fiber imaging bundles.
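The oscillatory behavior near the Nyquist frequency described above can be reproduced qualitatively with a much-simplified one-dimensional stand-in for the coupled-MTF model, in which each fiber simply averages the intensity over its core (hexagonal packing and the CCD coupling are ignored; all names and parameters are illustrative):

```python
import numpy as np

def coupled_modulation(n_fibers, fiber_d, freq, phase=0.0, oversample=200):
    """Modulation depth of a sinusoidal intensity target sampled by a 1-D
    array of n_fibers fibers of diameter fiber_d, each averaging the
    intensity over its core.  A simplified stand-in for the coupled-MTF."""
    x = np.linspace(0.0, n_fibers * fiber_d, n_fibers * oversample,
                    endpoint=False)
    target = 0.5 * (1.0 + np.cos(2 * np.pi * freq * x + phase))  # in [0, 1]
    samples = target.reshape(n_fibers, oversample).mean(axis=1)  # per-fiber mean
    return samples.max() - samples.min()

# Nyquist frequency for fiber pitch d; offset it by 1% as in the study
d = 2.5e-3                      # fiber diameter in mm (2.5 um)
f_nyq = 1.0 / (2.0 * d)
m = coupled_modulation(1000, d, 1.01 * f_nyq)
```

Sweeping `n_fibers` or the phase offset shows the slow oscillatory convergence the study quantifies; the full model additionally accounts for the hexagonal fiber geometry and the CCD pixel grid.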
... Monocentric multiscale cameras [7][8][9][10][11][12][13], artificial compound eyes [14][15][16][17][18][19][20], and panoramic monocentric fiber-coupled imagers [21][22][23][24][25] can obtain both high spatial resolution and ultrawide field of view, but none of them can obtain high-resolution spectral information. ...
... Figure 2(a) shows the cross-sectional view of a single fiber bundle containing seven optical fibers. The coupling end of each fiber bundle consists of seven heavily fused hexagonal cores, and the cladding thickness of each core can be reduced to much less than 2.5 µm over the short fused length [23,52]. Figure 2(b) shows the equivalent schematic diagram of a single fiber bundle coupled with a separate microlens of the first MLA, with each optical fiber coupled with a separate microlens of the second MLA. ...
Article
Full-text available
A broadband high-spatial-resolution high-spectral-resolution flexible imaging spectrometer (B-2HSR-FIS) is presented, which includes two microlens arrays (MLAs), multiple fiber bundles, a scanning Fabry–Perot interferometer (FPI), a reflection grating, a cylindrical lens, and an area-array detector. The first MLA is arranged in a circular arc to obtain a field angle between 8° and 60° in the horizontal plane. The second MLA is arranged in a straight line. Each fiber bundle containing seven optical fibers is coupled to a separate microlens of the first MLA, subdividing the field angle of each microlens into seven smaller field angles to improve spatial resolution. The combination of a scanning FPI and a reflection grating enables the B-2HSR-FIS to obtain both high spectral resolution and broadband spectral range in the ultraviolet to near-infrared spectral region. Compared with all existing imaging spectrometers, the B-2HSR-FIS is the first to simultaneously obtain high spatial resolution, high spectral resolution, broadband spectral range, and moderate field angle, to the best of our knowledge. The B-2HSR-FIS has great potential for vision intelligence (e.g., as an eye of a robot).
... The desire to increase performance or reduce the SWaC of imaging systems has motivated continuous efforts to develop new and improved optical technologies. In the last decade, developments in lens technology have spanned metasurface lenses [5][6][7][8][9][10], diffractive optical elements [11], dynamic gradient-index materials [12][13][14], freeform surfaces [15][16][17], transformation optics [18,19], printed lenses [20], highly aspheric molded plastics [21,22], and technologies to utilize curved image surfaces [23][24][25][26][27][28][29][30][31]. Each of these is vying to improve optical systems and has demonstrated compelling advancements over conventional optics. ...
Article
Full-text available
A new metric for imaging systems, the volumetric imaging efficiency (VIE), is introduced. It compares the compactness and capacity of an imager against fundamental limits imposed by diffraction. Two models are proposed for this fundamental limit, based on an idealized thin lens and on the optical volume required to form diffraction-limited images. The VIE is computed for 2,871 lens designs and plotted as a function of FOV; this quantifies the challenge of creating compact, wide-FOV lenses. We identify an empirical limit to the VIE, given by VIE < 0.920 × 10^(−0.582×FOV), when using conventional bulk optics imaging onto a flat sensor. We evaluate the VIE for lenses employing curved image surfaces and planar, monochromatic metasurfaces to show that these new optical technologies can surpass the limit of conventional lenses and yield a >100× increase in VIE.
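The empirical bound quoted in this abstract is straightforward to evaluate. A minimal sketch, noting that the abstract does not state the units of FOV used in the fit, so the argument below is in whatever units the original fit assumes:

```python
def vie_empirical_limit(fov):
    """Empirical upper bound on the volumetric imaging efficiency (VIE)
    of conventional bulk optics imaging onto a flat sensor, from the fit
    VIE < 0.920 * 10**(-0.582 * FOV).  The units of `fov` follow the
    original fit (unstated in the abstract)."""
    return 0.920 * 10.0 ** (-0.582 * fov)
```

The bound decays exponentially with FOV, which is the quantified form of the claim that compact, wide-FOV lenses are hard to realize with conventional optics.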
... In another respect, an ultra-wide FOV is very important for wide-angle search requirements, especially in surveillance and reconnaissance. At present, an ultra-wide FOV for optical sensors is realized by four methods: (1) a fisheye lens [11][12][13][14], a catadioptric panoramic lens [15][16][17][18][19], or both [20,21]; (2) monocentric multiscale cameras [22][23][24][25][26][27][28], which separate a lens system into a single primary lens and multiple small-scale secondary lens arrays; (3) artificial compound eyes [29][30][31][32][33][34][35]; (4) panoramic monocentric fiber-coupled imagers [36][37][38][39][40], which combine a panoramic monocentric lens with fiber bundles. However, none of these four methods can obtain high-resolution spectral information. ...
... D_F is the cross-sectional size of the core of a single optical fiber, D_FB is the cross-sectional size of a single fiber bundle, and θ_in is the cone angle of the light entering each optical fiber core. Each fiber bundle contains nineteen (or more) hexagonal optical fibers, and one end of each fiber bundle consists of heavily fused hexagonal cores, as shown in Fig. 2(a) (the cladding thickness can be reduced to much less than 2.5 µm over the short fused length [38]); this enables the field angle of each microlens of the MLA to be subdivided into nineteen (or more) smaller field angles to improve angular resolution. The curved fiber bundle shown in Fig. 2(b) is used to obtain uniformly efficient coupling across the field of view of each microlens of the MLA [36], which makes θ_in of each optical fiber almost equal, so that θ_o and d_x of each optical fiber are also almost equal. ...
... Second, because the cladding of the optical fiber has a finite thickness, the field of view is not completely continuous. However, the cladding thickness can be reduced to much less than 2.5 µm over the short fused length [38], and the field angle of each microlens of the MLA is usually small and subdivided into multiple smaller field angles, so the field-of-view coverage is still sufficient to obtain high spatial resolution. If each fiber bundle contains nineteen optical fibers, as shown in Fig. 2(a), the field-of-view coverage can be approximately calculated by ...
Article
Full-text available
An ultra-wide-angle high-spatial-resolution high-spectral-resolution snapshot imaging spectrometer (UWA-2HSR-SIS) is presented, which comprises a microlens array (MLA), multiple fiber bundles, a micro-cylindrical-lens array (MCLA), a cylindrical lens, a static grating interferometer (SGI), and an area-array detector. The MLA is arranged in a circular arc of 120° or more. The MCLA is arranged in a straight line. The SGI includes a fixed reflection grating in Littrow configuration, a beam splitter, and a fixed plane mirror. Each fiber bundle containing multiple optical fibers is coupled to a separate microlens of the MLA, subdividing the field angle of each microlens into multiple smaller field angles. The light passing through each subdivided smaller field angle of each microlens of the MLA is received by a separate part of the detector. The UWA-2HSR-SIS is a new concept that not only obtains both high spatial resolution and high spectral resolution based on a single sensor for the first time, but also has an ultra-wide field angle in the horizontal plane, can obtain spectral information covering the full spectral range of interest in real time, and is very stable against various disturbances. The UWA-2HSR-SIS has great potential for remote sensing electro-optical reconnaissance sensors in the visible and near-infrared region.