Figure 1 - available from: Artificial Intelligence Review
Structures of focal apposition compound eyes. a Compound eye of the dragonfly Aeschna sp. (Source: de.wikipedia.org, David L. Green.) b1 Schematic drawing of a focal apposition compound eye (bee). b2 Cross-section through a visual unit (ommatidium): 1 lens, 2 crystalline cone, 3 rhabdom, 4 receptor cell (sensory cell), 5 screening pigment cells. b3 Light path and optical signal processing in the ommatidium of a focal apposition eye. b4 Cross-section of b2. c1 Ocellus. c2 Light path through an ocellus (underfocused). d Vertebrate eye. e Acuity of the mosaic-like image formation of an apposition compound eye, depending on the number of ommatidia. f Resolution with a curved visual surface. g Resolution with a flat visual surface. h Resolution with a short crystalline cone. i Resolution with a long crystalline cone. j Distance estimation in a focal apposition compound eye


Source publication
Article
Full-text available
An artificial compound eye system is a bionic counterpart of the natural compound eye, offering a much wider field of view, a better capacity to detect moving objects, and a higher sensitivity to light intensity than ordinary single-aperture eyes. In recent years, renewed attention has been paid to artificial compound eyes due to their better characteristics in...

Similar publications

Article
Full-text available
Skin-attachable and flexible strain sensors have attracted extensive interest because they enable the monitoring of individual physical actions, health status, and sound vibrations. In this paper, multi-walled carbon nanotubes (MWCNTs) are transfer-printed into an Ecoflex substrate to fabricate MWCNT grid/Ecoflex strain sensors. We investigated the e...
Article
Full-text available
To achieve a system combining a large field of view (FOV), high image quality, high 3D detection accuracy, and high-speed tracking at close range, a single-layered compound eye with seven ommatidia is developed. The functional relationship among the FOV, the optical axis angle, and the overlapping area is established mathematically. Based on the relationshi...

Citations

... An artificial compound eye (ACE) is a type of camera with a miniature volume and a large FOV. Different kinds of artificial compound eye systems have been proposed [1][2][3]. The ACE-captured image is an array of sub-images with slightly different viewpoints. ...
... This will result in much less contextual information being used. 3. ...
... Floreano et al. proposed the CurvACE (Curved Artificial Compound Eye) [23]. Many other ACE systems can be found in [3,[24][25][26]. ...
Article
Full-text available
An artificial compound eye consists of multiple apertures that allow for a large field of view (FOV) while maintaining a small size. Each aperture captures a sub-image, and multiple sub-images are needed to reconstruct the full FOV. The reconstruction process is depth-related due to the parallax between adjacent apertures. This paper presents an all-in-focus 3D reconstruction method for a specific type of artificial compound eye called the electronic cluster eye (eCley). The proposed method uses edge matching to address the edge blur and large textureless areas existing in the sub-images. First, edges are extracted from each sub-image, and then a matching operator is applied to match the edges based on their shape context and intensity. This produces a sparse matching result that is then propagated to the whole image. Next, a depth consistency check and refinement method is performed to refine the depth of all sub-images. Finally, the sub-images and depth maps are merged to produce the final all-in-focus image and depth map. The experimental results and comparative analysis demonstrate the effectiveness of the proposed method.
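The pipeline above (edge extraction, shape-context matching, propagation into textureless regions, consistency refinement, and merging) can be illustrated with a much-simplified sketch. The snippet below is not the paper's method: shape-context matching is replaced by a plain SSD search along edge pixels, the propagation step uses OpenCV inpainting, and all names, thresholds, and window sizes are illustrative assumptions.

# Simplified sketch of an edge-driven depth step for two adjacent sub-images
# (uint8 grayscale). SSD matching stands in for the paper's shape-context
# matcher; inpainting stands in for its propagation step.
import numpy as np
import cv2

def edge_disparity(sub_left, sub_right, max_disp=16, win=5):
    """Sparse disparity estimated only at edge pixels of the left sub-image."""
    edges = cv2.Canny(sub_left, 50, 150)
    h, w = sub_left.shape
    pad = win // 2
    disp = np.full((h, w), np.nan, dtype=np.float32)
    L, R = sub_left.astype(np.float32), sub_right.astype(np.float32)
    for y, x in zip(*np.nonzero(edges)):
        if y < pad or y >= h - pad or x < pad or x >= w - pad - max_disp:
            continue
        patch = L[y - pad:y + pad + 1, x - pad:x + pad + 1]
        costs = [np.sum((patch - R[y - pad:y + pad + 1,
                                   x + d - pad:x + d + pad + 1]) ** 2)
                 for d in range(max_disp)]
        disp[y, x] = float(np.argmin(costs))
    return disp

def propagate_disparity(sparse_disp):
    """Fill textureless regions by inpainting the sparse edge disparities."""
    mask = np.isnan(sparse_disp).astype(np.uint8)
    d8 = np.nan_to_num(sparse_disp).astype(np.uint8)   # small disparities fit in 8 bits
    return cv2.inpaint(d8, mask, 3, cv2.INPAINT_TELEA)

A full reconstruction would repeat this over all aperture pairs, cross-check the resulting depths for consistency, and then fuse the sub-images and depth maps into the all-in-focus result, as the abstract describes.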
... The compound eyes can be divided into two types according to the imaging principle: the apposition compound eyes and the superposition compound eyes. [28][29][30][31][32] The former can be regarded as a one-to-one relationship, while the latter can be considered as a one-to-many relationship. 29 Our work is inspired by the superposition compound eye, 22 i.e., with a limited number of detectors, a much higher image resolution than the number of detectors can be achieved. ...
... [28][29][30][31][32] The former can be regarded as a one-to-one relationship, while the latter can be considered as a one-to-many relationship. 29 Our work is inspired by the superposition compound eye, 22 i.e., with a limited number of detectors, a much higher image resolution than the number of detectors can be achieved. Similar to strepsipteran insects, the field-of-view of each microlens used in our system is subdivided into and represented by "chunks," 32 resulting in a sub-image of several thousand pixels. ...
Article
Real-time computational ghost imaging (CGI) has received significant attention in recent years to overcome the trade-off between the long acquisition time and the high reconstructed image quality of CGI. Inspired by compound eyes, we propose parallel computational ghost imaging with modulation pattern multiplexing and permutation to achieve faster, higher-resolution CGI. With modulation pattern multiplexing and permutation, several small overlapping fields of view can be obtained; meanwhile, the difficulty of aligning the illumination light field with multiple detectors can be well resolved. Combining compound eyes with multiple detectors to capture light intensity resolves the issue of gaps between detector units in an array detector. Parallel computation facilitates significantly reduced acquisition time while maintaining reconstruction quality without compromising the sampling ratio. Experiments indicate that using m×m detectors reduces the modulation pattern count, projector storage, and projection time to around 1/m^2 of typical CGI methods, while increasing the image resolution by a factor of m^2. This work greatly promotes the practicability of parallel computational ghost imaging and provides an optional solution for real-time computational ghost imaging.
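As a quick, purely illustrative check of the 1/m^2 scaling claimed above (the values of N and m below are assumptions, not from the paper): with an m×m detector grid, each detector only needs patterns covering its own sub-field, so a full-sampling pattern set shrinks by m^2 while the stitched image keeps the full resolution.

# Illustrative pattern-count arithmetic for parallel CGI (assumed values)
N = 256                                   # full image width/height in pixels
m = 4                                     # detector grid is m x m
patterns_single = N * N                   # full sampling, one detector
patterns_parallel = (N // m) * (N // m)   # full sampling per sub-field
print(patterns_single, patterns_parallel, patterns_single // patterns_parallel)
# 65536 4096 16  -> 1/m^2 of the patterns for the same N x N image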
... Compared with traditional optical systems, it has the advantages of small size, large field of view, high temporal sensitivity, and polarization vision [1][2][3][4]. Generally, bionic compound eyes are designed as a planar microlens array or a curved microlens array [5][6][7][8][9][10]. The disadvantage of planar microlens arrays is their small field of view [11][12][13]. ...
Article
Full-text available
Wide field of view and polarization imaging capabilities are crucial for the implementation of advanced imaging devices. However, there are still great challenges in the integration of such optical systems. Here, we report a bionic compound eye metasurface that can realize full-Stokes polarization imaging over a wide field of view. The bionic compound eye metasurface consists of a bifocal metalens array in which every three bifocal metalenses form a subeye. The phase of the bifocal metalens is composed of a gradient phase and a hyperbolic phase. Numerical simulations show that the bifocal metalens can not only improve the focusing efficiency for obliquely incident light but also correct the aberration caused by oblique incidence, and the field of view of the bionic compound eye metasurface can reach 120° × 120°. We fabricated a bionic compound eye metasurface consisting of three subeyes. Experiments show that the bionic compound eye metasurface can perform near-diffraction-limited polarization focusing and imaging over a large field of view. The design method is generic and can be used to design metasurfaces with different materials and wavelengths. It has great potential in the fields of robot polarization vision and polarization detection.
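The abstract states that each bifocal metalens phase combines a gradient phase with a hyperbolic phase. A common textbook form of these two terms is sketched below using the notation λ (wavelength), f (focal length), and θ (deflection angle); this is an assumed formulation for illustration, not the paper's exact design phase.

% Assumed textbook form of the two phase terms (not the paper's exact profile):
% a hyperbolic focusing phase plus a linear gradient phase.
\varphi(x,y) = -\frac{2\pi}{\lambda}\left(\sqrt{x^{2}+y^{2}+f^{2}} - f\right) - \frac{2\pi}{\lambda}\, x \sin\theta

The first term focuses a normally incident plane wave at distance f; the second cancels the linear phase of a plane wave arriving at angle θ, so the hyperbolic term can focus it as if it were at normal incidence, which is the aberration-correction role attributed to the gradient phase above.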
... Compared to human eyes, the insect compound eyes can detect highly dynamic moving targets through multiple sub-eye imaging and generate the global optical flow field for flight process estimation by receiving local motion signals, providing support for landing and obstacle avoidance [4,5]. The artificial compound eye (ACE) [6] can surpass the visual task of the traditional imaging system by imitating the characteristics of the insect compound eye in some specific environments [7,8]. Inspired by this phenomenon, researchers conducted experiments on optical flow field estimation based on ACE, which offers important research value for the realization of accurate navigation and obstacle avoidance of unmanned aerial vehicle (UAV) and unmanned ground vehicle (UGV) [9][10][11]. ...
Article
Full-text available
In this article, multi-scale optical flow estimation for images taken by an artificial compound eye (ACE) is investigated. The optical flow estimation of an ACE must be adapted by designing algorithms according to its unique multi-aperture characteristics. A more general filter for the regularization term, rather than the single iterative solution of the traditional variational model, is devised using the non-subsampled contourlet transform (NSCT) to enforce band decomposition for estimating the optical flow field. To circumvent the spillover and error of the single-aperture fringe flow field, a flow gradient weight is introduced to suppress them and to enhance motion details. Additionally, the low-pass subbands adopt a Bayes threshold operation, which efficiently eliminates outliers, and the high-pass subbands adopt a guided filter (GF), which separates important details from outliers. The prominent feature of the proposed method is that the accuracy of optical flow estimation is effectively improved by eliminating outliers. Eventually, the superiority of the explored optical flow estimation is demonstrated through experimental results.
... With the ever-increasing demand for compactness in imaging optics, great effort has been devoted to developing lensless cameras over the past few years. To date, two prevailing lensless methods exist: natural compound-eye mimicry [1] and heuristic point spread function (PSF) engineering [2]. ...
Article
Full-text available
Lensless cameras are a class of imaging devices that shrink the physical dimensions to the very close vicinity of the image sensor by replacing conventional compound lenses with integrated flat optics and computational algorithms. Here we report a diffractive lensless camera with spatially-coded Voronoi-Fresnel phase to achieve superior image quality. We propose a design principle of maximizing the acquired information in optics to facilitate the computational reconstruction. By introducing an easy-to-optimize Fourier domain metric, Modulation Transfer Function volume (MTFv), which is related to the Strehl ratio, we devise an optimization framework to guide the optimization of the diffractive optical element. The resulting Voronoi-Fresnel phase features an irregular array of quasi-Centroidal Voronoi cells containing a base first-order Fresnel phase function. We demonstrate and verify the imaging performance for photography applications with a prototype Voronoi-Fresnel lensless camera on a 1.6-megapixel image sensor in various illumination conditions. Results show that the proposed design outperforms existing lensless cameras, and could benefit the development of compact imaging systems that work in extreme physical conditions.
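The MTFv metric mentioned above can be approximated in a few lines: take the system PSF, compute its MTF as the magnitude of the Fourier transform of the normalized PSF, and integrate over spatial frequency. The exact definition, normalization, and any weighting used in the paper may differ; the function below is an assumed, simplified reading for illustration.

# Simplified sketch of an MTF-volume style metric for a sampled PSF
import numpy as np

def mtf_volume(psf):
    """Average (volume per unit frequency area) of the 2-D MTF."""
    otf = np.fft.fft2(psf / psf.sum())   # optical transfer function, DC term = 1
    mtf = np.abs(otf)                    # modulation transfer function
    return mtf.mean()                    # larger value = more contrast preserved

# Example with illustrative Gaussian blur PSFs: a narrower PSF (sharper optics)
# yields a larger MTF volume, which is what the optimization rewards.
x = np.linspace(-8, 8, 64)
X, Y = np.meshgrid(x, x)
print(mtf_volume(np.exp(-(X**2 + Y**2) / 2.0)),
      mtf_volume(np.exp(-(X**2 + Y**2) / 8.0)))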
... Arthropods have evolutionarily developed various kinds of compound eyes based on their physical size and living environments. Many artificial compound eyes have been proposed that imitate such natural compound eyes [1][2][3][4]. An apposition compound eye is common among such diurnal insects as bees and dragonflies, and its structure is simple: an aggregation of ommatidia, each of which has only one image sensor pixel and one microlens. ...
Article
Full-text available
We propose a design approach for a thin image scanner using the concept of an apposition compound eye comprising many imaging units that take only one pixel image. Although light shielding between adjacent imaging units is always one of the main issues for an artificial compound eye, a simple plane structure using three aperture array layers on two glued glass plates prevents such stray light. Our prototyped scanner, with only 6.8-mm thickness as a packaged module, has 632 microlenses with 200-dpi resolution, resulting in a field of view of 80 mm. The evaluated images show no ghost images.
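The reported numbers are mutually consistent, which is worth a one-line check: 632 one-pixel imaging units sampled at 200 dpi (200 samples per 25.4 mm) span about 80 mm.

# Consistency check of the scanner figures quoted above
lenses = 632
dpi = 200
print(round(lenses / dpi * 25.4, 1))   # ~80.3 mm field of view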
... Unlike vertebrate single-aperture eyes, compound eyes (refs. 8,9 ) are multiaperture systems made up of small eyes with different viewing angles; thus, their FOV can be as wide as that of fish eyes. Moreover, due to independent sensing neurons, with each corresponding to an ommatidium, and the parallel processing procedure, compound eyes have the advantages of a high update rate and high sensitivity to motion that human eyes and fish eyes do not have. ...
Article
Full-text available
Optical measurement systems suffer from a fundamental tradeoff between the field of view (FOV), the resolution and the update rate. A compound eye has the advantages of a wide FOV, high update rate and high sensitivity to motion, providing inspiration for breaking through the constraint and realizing high-performance optical systems. However, most existing studies on artificial compound eyes are limited by complex structure and low resolution, and they focus on imaging instead of precise measurement. Here, a high-performance lensless compound eye microsystem is developed to realize target motion perception through precise and fast orientation measurement. The microsystem splices multiple sub-FOVs formed by long-focal subeyes, images targets distributed in a panoramic range into a single multiplexing image sensor, and codes the subeye aperture array for distinguishing the targets from different sub-FOVs. A wide-field and high resolution are simultaneously realized in a simple and easy-to-manufacture microelectromechanical system (MEMS) aperture array. Moreover, based on the electronic rolling shutter technique of the image sensor, a hyperframe update rate is achieved by the precise measurement of multiple time-shifted spots of one target. The microsystem achieves an orientation measurement accuracy of 0.0023° (3σ) in the x direction and 0.0028° (3σ) in the y direction in a cone FOV of 120° with an update rate ~20 times higher than the frame rate. This study provides a promising approach for achieving optical measurements with comprehensive high performance and may have great significance in various applications, such as vision-controlled directional navigation and high-dynamic target tracking, formation and obstacle avoidance of unmanned aerial vehicles.
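The orientation measurement described above amounts to converting a spot displacement behind a long-focal subeye into an incidence angle. A pinhole-style sketch is given below; the pixel pitch and focal length are assumed placeholder values, not the microsystem's actual parameters, and the paper's multi-spot, rolling-shutter averaging is omitted.

# Pinhole-style angle-from-spot-displacement sketch (assumed parameters)
import math

def spot_to_angles(dx_px, dy_px, pixel_pitch_um=3.0, focal_mm=10.0):
    """Convert a spot displacement in pixels to x/y incidence angles in degrees."""
    f_um = focal_mm * 1000.0
    ax = math.degrees(math.atan2(dx_px * pixel_pitch_um, f_um))
    ay = math.degrees(math.atan2(dy_px * pixel_pitch_um, f_um))
    return ax, ay

# A 0.1-pixel centroid shift corresponds to roughly 0.002 degrees with these
# placeholder values, the same order as the reported accuracy.
print(spot_to_angles(0.1, 0.1))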
... Part of the reason for this is that engineered visual sensors are currently outclassed by insect eyes: they have a smaller field of view and slower update rates. Multiple successful designs for artificial compound eyes have been proposed in the academic literature [54], [167], but the lack of mass production, and hence wide availability, of such sensors is related to the absence of the full autonomy stack, and hence the promise of widespread real-world application. ...
Article
Full-text available
Autonomous robots are expected to perform a wide range of sophisticated tasks in complex, unknown environments. However, available onboard computing capabilities and algorithms represent a considerable obstacle to reaching higher levels of autonomy, especially as robots get smaller and the end of Moore's law approaches. Here, we argue that inspiration from insect intelligence is a promising alternative to classic methods in robotics for the artificial intelligence (AI) needed for the autonomy of small, mobile robots. The advantage of insect intelligence stems from its resource efficiency (or parsimony) especially in terms of power and mass. First, we discuss the main aspects of insect intelligence underlying this parsimony: embodiment, sensory-motor coordination, and swarming. Then, we take stock of where insect-inspired AI stands as an alternative to other approaches to important robotic tasks such as navigation and identify open challenges on the road to its more widespread adoption. Last, we reflect on the types of processors that are suitable for implementing insect-inspired AI, from more traditional ones such as microcontrollers and field-programmable gate arrays to unconventional neuromorphic processors. We argue that even for neuromorphic processors, one should not simply apply existing AI algorithms but exploit insights from natural insect intelligence to get maximally efficient AI for robot autonomy.
... Our results provide valuable evidence that geometrical patterns based on the semiconcentric growth play an important role in biological patterning. As the optical characteristics of biological visual systems have been leveraged in new technologies, such as artificial compound eyes, 45,46 information regarding semiconcentric ommatidial growth may have potential applications in bionics-related research in the future. ...
Article
Tiling patterns are observed in many biological structures. The compound eye is an interesting example of tiling and is often constructed from hexagonal arrays of ommatidia, the optical units of the compound eye. Hexagonal tiling may be common due to mechanical restrictions such as structural robustness, minimal boundary length, and space-filling efficiency. However, some insects exhibit tetragonal facets [1-4]. Some aquatic crustaceans, such as shrimp and lobsters, have evolved with tetragonal facets [5-8]. The mantis shrimp is an insightful example, as its compound eye has a tetragonal midband region sandwiched between hexagonal hemispheres [9,10]. This casts doubt on the naive explanation that hexagonal tiles recur in nature because of their mechanical stability. Similarly, tetragonal tiling patterns are also observed in some Drosophila small-eye mutants, whereas the wild-type eyes are hexagonal, suggesting that ommatidial tiling is not simply explained by such mechanical restrictions. If so, how are the hexagonal and tetragonal patterns controlled during development? Here, we demonstrate that geometrical tessellation determines the ommatidial tiling patterns. In small-eye mutants, the hexagonal pattern is transformed into a tetragonal pattern as the relative positions of neighboring ommatidia are stretched along the dorsal-ventral axis. We propose that the regular distribution of ommatidia and their uniform growth collectively play an essential role in the establishment of tetragonal and hexagonal tiling patterns in compound eyes.
... Video sensors inspired by compound eyes of insects are promising alternatives to digital cameras [7]. Such video sensors have no moving parts and do not require any control. ...
... Each ommatidium consists of a microlens (facet lens) and a small group of rhabdomere (photoreceptor) bundles, called the rhabdom. The pigments form opaque walls between adjacent ommatidia to prevent light focused by one microlens from reaching the receptor of the adjacent channel [7]. There are no moving or dynamically transforming parts in the apposition compound eye, and it does not need to be controlled by the nervous system. ...
Article
Full-text available
This paper presents a two-dimensional mathematical model of compound eye vision. Such a model is useful for solving navigation issues for autonomous mobile robots on the ground plane. The model is inspired by the insect compound eye, which consists of ommatidia: tiny independent photoreception units, each of which combines a cornea, a lens, and a rhabdom. The model describes planar binocular compound eye vision, focusing on measuring the distance and azimuth to a circular feature of arbitrary size. The model provides a necessary and sufficient condition for the visibility of a circular feature by each ommatidium. On this basis, an algorithm is built for generating a training data set to create two deep neural networks (DNNs): the first detects the distance, and the second detects the azimuth to a circular feature. The hyperparameter tuning and configurations of both networks are described. Experimental results showed that the proposed method can effectively and accurately detect the distance and azimuth to objects.
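One plausible form of the per-ommatidium visibility condition mentioned above is an angular-overlap test: the circular feature is seen if the angular interval it subtends from the eye overlaps the ommatidium's acceptance cone. The sketch below is an assumed reading for illustration; the paper's exact necessary and sufficient condition may differ.

# Assumed 2-D visibility test for a circular feature and one ommatidium
import math

def wrap(a):
    """Wrap an angle to (-pi, pi]."""
    return math.atan2(math.sin(a), math.cos(a))

def sees_circle(eye_xy, view_angle, half_acceptance, center_xy, radius):
    """True if the circle's subtended angular interval overlaps the acceptance cone."""
    ex, ey = eye_xy
    cx, cy = center_xy
    dist = math.hypot(cx - ex, cy - ey)
    if dist <= radius:
        return True                         # the ommatidium lies inside the feature
    bearing = math.atan2(cy - ey, cx - ex)  # azimuth of the circle's center
    subtended = math.asin(radius / dist)    # half-angle subtended by the circle
    return abs(wrap(bearing - view_angle)) <= half_acceptance + subtended

# Example: an ommatidium looking 30 deg off the x-axis with a 2 deg half-angle
print(sees_circle((0.0, 0.0), math.radians(30), math.radians(2), (5.0, 3.0), 0.5))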