Figure - available from: Nature
Photodiode characteristics
a, Current–voltage characteristic curve under dark (blue) and illuminated (green) conditions. The series resistance Rs and shunt resistance Rsh are ~10⁶ Ω and 10⁹ Ω, respectively. For zero-bias operation, we estimate a noise-equivalent power of NEP = Ith/R ≈ 10⁻¹³ W Hz⁻¹/², where R ≈ 60 mA W⁻¹ is the (maximum) responsivity and Ith = √(4kBTΔf/Rsh) the thermal noise, where kB is the Boltzmann constant, Δf is the bandwidth and T is the temperature. b, Dependence of the short-circuit photocurrent on the light intensity for different split-gate voltages. Importantly, the response is linear (I ∝ P), as assumed in equation (1).
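As a quick check on the quoted figure, the thermal-noise current and the resulting NEP follow directly from the values stated in the caption; the short sketch below assumes room temperature (T = 300 K) and a 1 Hz bandwidth, which are not given explicitly, and reproduces the ~10⁻¹³ W Hz⁻¹/² estimate.

```python
from math import sqrt

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # temperature, K (assumed room temperature)
delta_f = 1.0        # bandwidth, Hz (per-sqrt(Hz) figure)
R_sh = 1e9           # shunt resistance, ohms (from the I-V fit)
R = 60e-3            # responsivity, A/W (~60 mA/W maximum)

# Thermal (Johnson-Nyquist) noise current: I_th = sqrt(4 k_B T Δf / R_sh)
I_th = sqrt(4 * k_B * T * delta_f / R_sh)

# Noise-equivalent power for zero-bias (photovoltaic) operation
NEP = I_th / R

print(f"I_th ≈ {I_th:.2e} A/√Hz")   # ≈ 4.1e-15 A/√Hz
print(f"NEP  ≈ {NEP:.2e} W/√Hz")    # ≈ 6.8e-14 W/√Hz, i.e. ~1e-13
```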

Source publication
Article
Full-text available
Machine vision technology has taken huge leaps in recent years, and is now becoming an integral part of various intelligent systems, including autonomous vehicles and robotics. Usually, visual information is captured by a frame-based camera, converted into a digital format and processed afterwards using a machine-learning algorithm such as an artif...

Similar publications

Article
Full-text available
The development of neuromorphic visual systems has recently gained momentum due to their potential in areas such as autonomous vehicles and robotics. However, current machine visual systems based on silicon technology usually contain photosensor arrays, format conversion, memory and processing modules. As a result, the redundant data shuttling betw...

Citations

... Another series of research focuses on the development of intelligent sensors that transfer certain numerical operations to the electronic image sensors or their vicinity [24]. Instead of extracting raw image pixels, only effective data, including the spatial image features [25][26][27][28] and temporal dynamics [29][30][31], are read out. This approach alleviates the bandwidth pressure between sensing and computing. ...
Article
Full-text available
Image processing, transmission, and reconstruction constitute a major proportion of information technology. The rapid expansion of ubiquitous edge devices and data centers has led to substantial demands on the bandwidth and efficiency of image processing, transmission, and reconstruction. The frequent conversion of serial signals between the optical and electrical domains, coupled with the gradual saturation of electronic processors, has become the bottleneck of end-to-end machine vision. Here, we present an optical parallel computational array chip (OPCA chip) for end-to-end processing, transmission, and reconstruction of optical intensity images. By proposing constructive and destructive computing modes on the large-bandwidth resonant optical channels, a parallel computational model is constructed to implement end-to-end optical neural network computing. The OPCA chip features a measured response time of 6 ns and an optical bandwidth of at least 160 nm. Optical image processing can be efficiently executed with minimal energy consumption and latency, liberated from the need for frequent optical–electronic and analog–digital conversions. The proposed optical computational sensor opens the door to extremely high-speed processing, transmission, and reconstruction of visible content with nanosecond response times and terahertz bandwidth.
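One generic way an intensity-based optical processor can represent signed weights is by splitting each weight into a positive and a negative channel and subtracting their contributions at readout; the sketch below illustrates only that general principle, not necessarily the specific constructive/destructive resonant modes of the OPCA chip.

```python
import numpy as np

# Generic illustration: signed weights realized with two non-negative
# (intensity-like) channels, one adding to and one subtracting from the
# accumulated output. A common trick in intensity-based optical
# matrix-vector multiplication; the OPCA chip's constructive/destructive
# modes may differ in their physical implementation.

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))          # desired signed weight matrix
x = rng.uniform(size=8)              # non-negative input intensities

W_plus = np.clip(W, 0, None)         # "constructive" channel weights
W_minus = np.clip(-W, 0, None)       # "destructive" channel weights

y = W_plus @ x - W_minus @ x         # detector-side subtraction
assert np.allclose(y, W @ x)         # equivalent to the signed product
```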
... Miniature computational spectrometers leveraging 2D materials utilize advanced algorithms to enhance spectral resolution and reconstruction accuracy without degrading device performance [29][30][31]. Nonetheless, these solutions face limitations, including restricted Fermi level tunability due to gate voltage modulation, challenges in dark current suppression, and constraints on the photoresponse matrix dimensionality arising from the linear relationship between the photogenerated current and the incident light, which collectively hinder accurate, high-resolution spectral analysis [32][33][34][35][36][37]. ...
Preprint
Full-text available
In the domain of spectroscopy, miniaturization efforts frequently encounter notable challenges, particularly in achieving high spectral resolution and reconstruction accuracy. Here, we introduce a computational spectrometer powered by a nonlinear photonic memristor featuring a WSe₂ homojunction. This innovation overcomes traditional limitations, such as constrained Fermi level tunability, persistent dark current depression, and limited photoresponse dimensionality, by leveraging dynamic energy band modulation via palladium (Pd) ion migration. This approach engenders pronounced nonlinearities in the spectral response, significantly enhancing spectral resolution and measurement precision. By integrating this system with a bespoke nonlinear neural network, our spectrometer achieves unprecedented peak wavelength accuracy (0.18 nm) and spectral resolution (2 nm) over a comprehensive 400–800 nm bandwidth. This development heralds a paradigm shift towards compact, highly efficient spectroscopic instruments and establishes a versatile framework for their application across a broad spectrum of material systems.
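For orientation, the reconstruction step shared by computational spectrometers of this kind can be written as solving I = A·s for the unknown spectrum s, where the rows of A are calibrated photoresponse spectra recorded in different tunable device states. The sketch below is a generic regularized least-squares baseline with assumed dimensions; the cited work instead relies on a nonlinear (memristive) response combined with a nonlinear neural-network reconstruction.

```python
import numpy as np

# Generic computational-spectrometer reconstruction: recover a spectrum s
# from measured photocurrents I = A @ s, where each row of A is the
# device's responsivity spectrum in a different tunable state.
# Dimensions and the Tikhonov regularization are illustrative assumptions.

rng = np.random.default_rng(1)
n_states, n_wavelengths = 64, 200
A = rng.uniform(size=(n_states, n_wavelengths))      # calibrated responsivity matrix
wl = np.linspace(400, 800, n_wavelengths)            # wavelength grid, nm
s_true = np.exp(-0.5 * ((wl - 633) / 5.0) ** 2)      # example narrow-band spectrum
I = A @ s_true + 1e-3 * rng.normal(size=n_states)    # noisy measurements

lam = 1e-2                                           # regularization strength
s_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_wavelengths), A.T @ I)

print("relative reconstruction error:",
      np.linalg.norm(s_hat - s_true) / np.linalg.norm(s_true))
```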
... 2D materials with electrically tunable photoresponse, on the other hand, offer highly tunable functionalities beyond the constituent materials and can overcome these limits [13][14][15][16][17][18][19]. Given that the 3D bulk materials possess mature semiconductor technology and abundant properties, combining 2D and 3D semiconductors offers opportunities to develop novel optoelectronic devices [3,4,[20][21][22][23][24][25][26][27][28][29][30][31][32]. ...
Article
Full-text available
Multiband recognition technology is being extensively investigated because of the increasing demand for on-chip, multifunctional, and sensitive devices that can distinguish coincident spectral information. Most existing multiband imagers use large arrays of photodetectors to capture different spectral components, from which their spectrum is reconstructed. A single device embedded with a convolutional neural network (CNN) capable of recognizing multiband photons allows the footprints of multiband recognition chips to be scaled down while achieving spectral resolution approaching that of benchtop systems. Here, we report a multiple and broadband photodetector based on 2D/3D van der Waals p/n/p heterostructures [p-germanium (Ge)/n-molybdenum disulfide (MoS₂)/p-black phosphorus (bP)] with an electrically tunable transport-mediated spectral photoresponse. The devices show bias-tunable multiband photodetection (visible, short-wave infrared, and mid-wave infrared photoresponse). Further combination with the CNN algorithm enables crosstalk suppression of photoresponse to different wavelengths and high-accuracy blackbody radiation temperature recognition. The deep multiband photodetection strategies demonstrated in this work may open pathways towards the integration of multiband vision for application in on-chip neural network perception.
... 6 In this regard, the International Roadmap for Devices and Systems (IRDS) 2022 lists two-dimensional (2D) materials as probable candidates for standard complementary metal-oxide semiconductor (CMOS) technology in the coming years. 7 2D materials not only offer sub-nanometer thicknesses but also provide a wide range of opportunities for neuromorphic computing, 8,9 photonic integrated circuits, 10,11 and quantum technologies. 12,13 Therefore, intensive efforts have been made in the past decade to make 2D material-based very large scale integration (VLSI) technology a reality. ...
Article
Full-text available
Contact resistance is a multifaceted challenge faced by the 2D materials community. Large Schottky barrier heights and gap-state pinning are active obstacles that require an integrated approach to achieve the development of high-performance electronic devices based on 2D materials. In this work, we present semiconducting PtSe2 field effect transistors with all-van-der-Waals electrode and dielectric interfaces. We use graphite contacts, which enable high ION/IOFF ratios up to 10⁹ with currents above 100 μA μm⁻¹ and mobilities of 50 cm² V⁻¹ s⁻¹ at room temperature and over 400 cm² V⁻¹ s⁻¹ at 10 K. The devices exhibit high stability with a maximum hysteresis width below 36 mV nm⁻¹. The contact resistance at the graphite−PtSe2 interface is found to be below 700 Ω μm. Our results present PtSe2 as a promising candidate for the realization of high-performance 2D circuits built solely with 2D materials.
... With the development of autonomous systems in a myriad of fields such as robotics, IoT and self-driving vehicles, the demand for perception systems comprising sensing and computational networks with smaller area and lower power consumption has been soaring [70][71][72]. The research above mainly focused on mimicking electrical signal processing in biological neurons, while in-sensor computing neuron devices were neglected. Burgeoning sensory neurons can directly transform raw data from the real world into electrical spiking signals, instead of using conversion circuits such as an analog-to-digital converter (ADC) or other hardware components for cascade connection with the von Neumann architecture [20]. ...
Article
Full-text available
Neuromorphic computing (NC), considered a promising candidate for future computer architectures, can facilitate more biomimetic intelligence while reducing energy consumption. The neuron is one of the critical building blocks of NC systems. Researchers have been engaged in developing neuron devices with better electrical properties and more biomimetic functions. Two-dimensional (2D) materials, with their ultrathin layers, diverse band structures, excellent electronic properties and various sensing abilities, promise to meet these requirements. Here, the progress in artificial neurons enabled by 2D materials is reviewed from the perspective of the electrical performance of neuron devices, from stability and tunability to power consumption and on/off ratio. Rising to system-level applications, algorithms and hardware implementations of spiking neural networks, stochastic neural networks and artificial perception systems based on 2D materials are reviewed. 2D materials not only facilitate the realization of NC systems but also increase their integration density. Finally, current challenges and perspectives on developing 2D material-based neurons and NC systems are systematically analyzed, from bottom-level 2D material fabrication to novel neural devices, more brain-like computational algorithms and systems.
... An unsatisfactory aspect of this approach is that such materials generally achieve tunable light responsivity through ion migration, which proceeds at a relatively slow rate 88. To overcome this drawback, Mennel et al. 89 created a dual-gate transistor (Fig. 6b). Each transistor operates in short circuit, with its responsivity individually set by a pair of gate voltages. ...
... Mennel and colleagues developed a sensor array containing an ANN using photodiodes 89. The array consists of N pixels, with each pixel comprising M sub-pixels, where each sub-pixel corresponds to a photodiode. ...
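Because each photodiode responds linearly (I ∝ P, as in panel b of the figure above) with a responsivity programmed by its split-gate voltages, summing the short-circuit currents of corresponding sub-pixels across the array directly yields the weighted sums of one neural-network layer. A minimal sketch of that readout model, with illustrative dimensions and responsivity values:

```python
import numpy as np

# In-sensor matrix-vector multiplication with programmable photodiodes:
# each of N pixels carries M sub-pixels; sub-pixel (m, n) outputs a
# short-circuit photocurrent I_mn = R[m, n] * P[n], and wiring the m-th
# sub-pixels of all pixels together sums their currents (Kirchhoff's law),
# giving M output currents I_m = sum_n R[m, n] * P[n].
# N, M and the responsivity values below are illustrative.

rng = np.random.default_rng(2)
N, M = 27, 3                                   # pixels, sub-pixels per pixel
R = rng.uniform(-60e-3, 60e-3, size=(M, N))    # gate-programmed responsivities, A/W (split gates allow both signs)
P = rng.uniform(0, 1e-6, size=N)               # optical power on each pixel, W

I_out = R @ P                                  # column-summed photocurrents = one layer's weighted sums
print(I_out)                                   # M analogue outputs, produced "in sensor"
```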
Article
Full-text available
The structure and mechanisms of the human visual system offer a rich source of inspiration, and surprising effects can be achieved by emulating it. In this article, starting from the human visual system, we compare and discuss the discrepancies between the human visual system and traditional machine vision systems. Given the wide variety and large volume of visual information, the use of non-von Neumann structured, flexible neuromorphic vision sensors can effectively compensate for the limitations of traditional machine vision systems based on the von Neumann architecture. Firstly, this article addresses the emulation of retinal functionality and provides an overview of the principles and circuit implementation methods of non-von Neumann computing architectures. Secondly, in terms of mimicking the retinal surface structure, this article introduces the fabrication approach for flexible sensor arrays. Finally, this article analyzes the challenges currently faced by non-von Neumann flexible neuromorphic vision sensors and offers a perspective on their future development.
... Progress in intelligent hardware accelerates the growth of artificial neural networks (ANNs) [1][2][3]. Optical neural networks (ONNs) are considered promising candidates for next-generation high-performance hardware processors due to their inherent advantages compared with traditional electronic neural networks [4][5][6]. Though the potential of ONNs for linear operations has been validated, the lack of optical nonlinearity remains an open challenge [7][8][9][10]. ...
... Finally, the nonlinear model of our activator can be described by Eqs. (2) and (3). ...
Article
Full-text available
As an alternative solution to surpass electronic neural networks, optical neural networks (ONNs) offer significant advantages in terms of energy consumption and computing speed. Although optical hardware platforms can provide a more efficient approach to realizing neural network algorithms than traditional hardware, the lack of optical nonlinearity limits the development of ONNs. Here, we propose and experimentally demonstrate an all-optical nonlinear activator based on stimulated Brillouin scattering (SBS). Utilizing the exceptional carrier dynamics of SBS, our activator supports two types of nonlinear functions, saturable absorption and rectified linear unit (ReLU) models. Moreover, the proposed activator exhibits a large dynamic response bandwidth (∼11.24 GHz), a low nonlinear threshold (∼2.29 mW), high stability, and wavelength-division-multiplexing capability. These features are advantageous for the physical realization of optical nonlinearities. As a proof of concept, we verify the performance of the proposed activator as an ONN nonlinear mapping unit via numerical simulations. The simulations show that our approach achieves performance comparable to the activation functions commonly used in computers. The proposed approach provides support for the realization of all-optical neural networks.
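To make the two supported nonlinearities concrete, the sketch below writes down a textbook saturable-absorption transfer function and a thresholded (ReLU-like) response; the functional forms and parameters are generic illustrations reusing the quoted ~2.29 mW threshold, not the measured SBS characteristics.

```python
import numpy as np

# Generic shapes of the two activation types mentioned in the abstract.
# Functional forms and parameter values are illustrative, not fitted to the SBS device.

def saturable_absorption(p_in, alpha0=0.8, p_sat=2.29e-3):
    """Transmission through a saturable absorber: the loss alpha0/(1 + P/P_sat)
    bleaches out at high input power, giving a saturating nonlinearity."""
    transmission = 1.0 - alpha0 / (1.0 + p_in / p_sat)
    return p_in * transmission

def optical_relu(p_in, p_th=2.29e-3):
    """ReLU-like response: negligible output below a power threshold,
    linear growth above it."""
    return np.maximum(p_in - p_th, 0.0)

p = np.linspace(0, 20e-3, 5)          # input optical power sweep, W
print(saturable_absorption(p))
print(optical_relu(p))
```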
... In the visible range, inspired by the intelligent biological approach of the retina's neurons, which not only detect light stimuli but also engage in initial image processing, memlogic optical sensing or in-sensor computing has emerged as a promising route to address the sensing and processing bottleneck 8. These innovative works employ novel materials such as two-dimensional materials [9][10][11], metal-oxide memristors 12, and phase-change materials (PCMs) 13, and have shown significantly reduced power consumption, reduced time delays, and minimized hardware redundancy. More remarkably, unprecedented new functionalities have also been demonstrated, such as supervised and unsupervised learning and training, highlighting the compelling advantages of memlogic sensors over traditional ones 14. ...
Article
Full-text available
Optical sensors with in-cell logic and memory capabilities offer new horizons in realizing machine vision beyond von Neumann architectures and have been attempted with two-dimensional materials, memristive oxides, phase-change materials, etc. Noting the unparalleled performance of superconductors, with both quantum-limited optical sensitivities and ultra-wide spectrum coverage, here we report a superconducting memlogic long-wave infrared sensor based on the bistability in the hysteretic superconductor–normal phase transition. Driven cooperatively by electrical and optical pulses, the device offers deterministic in-sensor switching between resistive and superconducting (hence dissipationless) states with persistence > 10⁵ s. This results in a resilient reconfigurable memlogic system applicable to, e.g., encrypted communications. In addition, a high infrared sensitivity at 12.2 μm is achieved through its in-situ metamaterial perfect-absorber design. Our work opens the avenue to realizing all-in-one superconducting memlogic sensors, surpassing biological retina capabilities in both sensitivity and wavelength, and presents a groundbreaking opportunity to integrate visual perception capabilities into superconductor-based intelligent quantum machines.
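The underlying memlogic mechanism can be pictured as a current-biased superconductor with a hysteretic switching window: between the retrapping and switching currents both the superconducting and the self-heated resistive states are stable, so an optical pulse latches the device resistive and lowering the bias resets it. A toy state-machine sketch of that bistability, with arbitrary illustrative thresholds:

```python
# Toy model of a hysteretic superconducting memlogic element.
# I_SWITCH: current above which the superconducting state breaks down.
# I_RETRAP: current below which a resistive hotspot re-cools and the
#           device returns to the superconducting state.
# For I_RETRAP < bias < I_SWITCH both states are stable (memory window).
# All numbers are illustrative, not measured device parameters.

I_SWITCH = 10.0   # μA
I_RETRAP = 4.0    # μA

class MemlogicPixel:
    def __init__(self):
        self.resistive = False                    # start superconducting

    def apply(self, bias_uA, optical_pulse=False):
        if bias_uA < I_RETRAP:
            self.resistive = False                # reset: hotspot collapses
        elif bias_uA > I_SWITCH or optical_pulse:
            self.resistive = True                 # set: light/overcurrent latches a hotspot
        # otherwise: inside the bistable window -> state persists (memory)
        return "resistive" if self.resistive else "superconducting"

px = MemlogicPixel()
print(px.apply(bias_uA=7.0))                       # superconducting (no stimulus)
print(px.apply(bias_uA=7.0, optical_pulse=True))   # latched resistive by light
print(px.apply(bias_uA=7.0))                       # state persists without light
print(px.apply(bias_uA=2.0))                       # reset below the retrapping current
```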
... In this study, we leverage the tunable optoelectronic characteristics inherent to two-dimensional materials [25][26][27][28][29][30][31][32], alongside the light resonant capability of a metasurface array, to propose a dynamically reconfigurable diffractive ONN based on a hybrid graphene metasurface (HGM) array. The designed neuron size is on the order of 20 micrometres, in contrast to the prior dimension of approximately 20 millimetres as reported in Ref. [23], which is accompanied by a corresponding shift in the operational frequency regime from gigahertz (GHz) to terahertz (THz). ...
... The designed neuron size is on the order of 20 micrometres, in contrast to the prior dimension of approximately 20 millimetres as reported in Ref. [23], which is accompanied by a corresponding shift in the operational frequency regime from gigahertz (GHz) to terahertz (THz). In addition, the prospective modulation speed of this platform is potentially limited only by the intrinsic carrier relaxation time of graphene, which operates at the picosecond scale [33,34], akin to other neural networks employing two-dimensional materials [29]. This reconfigurable ONN performs the multiplication of the projected image with a complex-valued transmission matrix. ...
Article
Full-text available
In recent years, optical neural networks (ONNs) have received considerable attention for their intrinsic parallelism and low energy consumption, making them a vital area of research. However, the current passive diffractive ONNs lack dynamic tunability after fabrication for specific tasks. Here, we propose a dynamically reconfigurable diffractive deep neural network based on a hybrid graphene metasurface array, wherein the transmission and refractive index of each pixel can be finely adjusted via gate voltage. This capability enables the tailored modulation of the incident light’s amplitude and phase at each pixel, aligning with specific task requirements. The simulation results show the attainability of a dynamic modulation range of 7.97 dB (ranging from −8.56 dB to −0.591 dB). Additionally, this proposed diffractive neural network platform incorporates an ultrathin structure comprising a one-atom-thick graphene layer and nanoscale metallic metastructures, rendering it compatible with complementary metal-oxide-semiconductor technology. Notably, a classification accuracy of 92.14% for a single-layer neural network operating in the terahertz spectrum is achieved based on the calculation result. This proposed platform presents compelling prospects for constructing various artificial neural network architectures with applications ranging from drug screening to automotive driving and vision sensing.
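Conceptually, each reconfigurable layer applies a per-pixel complex transmission (amplitude and phase set by the gate voltage) to the incident field and then lets the field diffract to the next plane. The sketch below illustrates that with a standard angular-spectrum propagator; the wavelength, pixel pitch, distance and transmission ranges are assumptions for illustration, not the reported HGM parameters.

```python
import numpy as np

# One layer of a diffractive ONN: an element-wise complex transmission mask
# (the gate-reconfigurable part), followed by free-space propagation to the
# next plane via the angular-spectrum method. Geometry and wavelength below
# are illustrative placeholders.

def propagate(field, wavelength, pitch, distance):
    """Angular-spectrum propagation of a sampled complex field."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)
    FX, FY = np.meshgrid(fx, fx)
    kz_sq = (1.0 / wavelength) ** 2 - FX**2 - FY**2
    prop = np.where(kz_sq > 0,
                    np.exp(2j * np.pi * distance * np.sqrt(np.maximum(kz_sq, 0.0))),
                    0.0)                                      # evanescent components discarded
    return np.fft.ifft2(np.fft.fft2(field) * prop)

n = 64
rng = np.random.default_rng(3)
incident = np.ones((n, n), dtype=complex)                     # plane-wave illumination carrying the input
amplitude = rng.uniform(0.37, 0.93, size=(n, n))              # per-pixel, gate-tunable transmission amplitude
phase = rng.uniform(0.0, 2 * np.pi, size=(n, n))              # per-pixel, gate-tunable phase
mask = amplitude * np.exp(1j * phase)                         # complex-valued transmission matrix

field_out = propagate(incident * mask, wavelength=300e-6, pitch=20e-6, distance=5e-3)
intensity = np.abs(field_out) ** 2                            # what a detector plane would record
```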
... Machine vision plays a crucial role in artificial intelligence (AI) and the Internet of Things (IoT), enabling the real-time perception and recognition of visual objects in the real world. [1][2][3][4][5] In particular, to deal with the explosively growing data generated at sensory terminals in an energy-efficient way, neuromorphic machine vision systems (NMVS) inspired by the human visual system have developed rapidly in recent years, [6][7][8][9][10][11][12][13][14][15] in which the front-end retinomorphic vision sensors can preprocess the captured images directly at sensory terminals, significantly reducing the latency and power consumption. 2,3,[16][17][18][19][20] Given the large intensity range of natural light (approximately 280 dB), 21 the accurate perception and recognition of objects from complex illumination conditions is still a great challenge for the practical applications of NMVS. ...
Article
Full-text available
Bioinspired neuromorphic machine vision system (NMVS) that integrates retinomorphic sensing and neuromorphic computing into one monolithic system is regarded as the most promising architecture for visual perception. However, the large intensity range of natural light and complex illumination conditions in actual scenarios always require the NMVS to dynamically adjust its sensitivity according to the environmental conditions, just like the visual adaptation function of the human retina. Although some opto‐sensors with scotopic or photopic adaptation have been developed, NMVSs, especially fully flexible NMVSs, with both scotopic and photopic adaptation functions are rarely reported. Here we propose an ion‐modulation strategy to dynamically adjust the photosensitivity and time‐varying activation/inhibition characteristics depending on the illumination conditions, and develop a flexible ion‐modulated phototransistor array based on a MoS₂/graphdiyne heterostructure, which can execute both retinomorphic sensing and neuromorphic computing. By controlling the intercalated Li⁺ ions in graphdiyne, both scotopic and photopic adaptation functions are demonstrated successfully. A fully flexible NMVS consisting of front‐end retinomorphic vision sensors and a back‐end convolutional neural network is constructed based on the as‐fabricated 28 × 28 device array, demonstrating quite high recognition accuracies for both dim and bright images and robust flexibility. This effort for fully flexible and monolithic NMVS paves the way for its applications in wearable scenarios.
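As a rough intuition for scotopic/photopic adaptation, the effective gain of an adaptive sensor rises under dim ambient light and falls under bright ambient light so that the output stays within a usable range. The toy Weber-style model below captures only that behaviour and makes no claim about the ion-modulation mechanism of the reported device.

```python
import numpy as np

# Toy visual-adaptation model: a Weber-style gain that is high under dim
# ambient illumination (scotopic-like) and compressed under bright ambient
# illumination (photopic-like). Purely illustrative; it does not model the
# ion-modulation physics of the MoS2/graphdiyne device.

def adapted_response(pixel_intensity, ambient_intensity, i_half=0.2):
    gain = 1.0 / (ambient_intensity + i_half)      # dim ambient -> high gain
    scaled = pixel_intensity * gain
    return scaled / (1.0 + scaled)                 # saturating output in [0, 1)

scene = np.array([0.02, 0.05, 0.10])                                      # dim scene
print(adapted_response(scene, ambient_intensity=scene.mean()))            # details preserved in the dark
print(adapted_response(50 * scene, ambient_intensity=50 * scene.mean()))  # same contrast, bright scene, not saturated
```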