Figure - available from: Scientific Reports
Structure and operational principles of the CGC-LIDAR. (a) Perspective illustration of a CGC-LIDAR. Circularly polarized light vertically incident on coupler A excites a guided wave that propagates uniformly from the center outward. The guided light is radiated from coupler C, concentrated near point F1, and converted into five 2D-scannable parallel beams by refraction at the side surfaces of the rod lens. (b) Cross-sectional illustration of a CGC-LIDAR. Concentric circular gratings are formed on the Ta2O5/SiO2 layers of the quartz plate and on the ITO/SiN layers of the HR plate. The upper and lower gratings and electrodes are divided, from inside to outside, into three areas: A, B, and C. The LC layer sandwiched between the two plates is homogeneously aligned along the rotational direction around axis L by the orientation force of the gratings. The CGC-LIDAR is built up concentrically from an RLE and a CGCP, which are joined with a matching oil. (c) Perspective-view SEM photographs of coupler A. (d) Top-view SEM photograph of coupler A. (e) Configuration diagram of the electrodes for areas A, B, and C. Electrode B is divided into 60 areas Bk (k = 0 to 59) in the rotational direction. While electrodes A and C, shaped as circles or half rings, have identical shapes in the Al layer and the ITO layer, the zigzag-shaped Bk electrodes are reversed between the two layers. Electrode C is divided in half, and the gaps between the halves carry the wiring for electrodes A and Bk. Because the AC signals applied to the Al and ITO electrodes are of opposite phase, the voltage difference between them is doubled. (f) Relationship between effective refractive index (ERI) and waveguide thickness. When AC signals are applied to the electrodes, the LC alignment tilts upward and the ERI falls in inverse proportion to the AC amplitude.
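As a hedged aside (not part of the original caption): why a change in ERI steers the radiated beam can be sketched with the standard grating equation for first-order out-coupling into air, where N_eff is the ERI, λ the free-space wavelength, Λ the grating period, and θ the radiation angle from the surface normal. The first-order and air-cladding assumptions here are illustrative, not taken from the figure:

\[
\sin\theta \;=\; N_{\mathrm{eff}} \;-\; \frac{\lambda}{\Lambda}
\]

Lowering N_eff with a larger AC amplitude thus tilts the beam out-coupled from the grating, which is what makes the radiation from coupler C electrically steerable.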

Source publication
Article
Sophisticated non-mechanical technology for LIDARs is needed to realize safe autonomous cars. We have confirmed the operating principle of a non-mechanical LIDAR by combining concentric circular-grating couplers (CGCs) with a coaxially aligned rod lens. Laser light incident vertically on the center of the inner CGC along the center axis of the lens...

Citations

... These include OPA-based solid-state LiDARs (Li and Ibanez-Guzman, 2020) (see also Sec. 10.2), Multiple-Input, Multiple-Output (MIMO) radars (Sun et al., 2020), affordable event cameras (Gallego et al., 2022), low-cost multi-frequency multi-GNSS receivers (Nguyen et al., 2021), smart AI (Artificial Intelligence) devices embedding microcontrollers able to locally execute deep neural network learning workloads (Ajani et al., 2021), and others. In the next few years, 360-degree solid-state scanning LiDARs (Nishiwaki, 2021) and low-cost megapixel-resolution depth cameras based on the piezoelectric effect (Atalar et al., 2022) could enter the market, making the spatial perception of mobile robots even more effective. ...
Preprint
A sensor is a device that converts a physical parameter or an environmental characteristic (e.g., temperature, distance, speed, etc.) into a signal that can be digitally measured and processed to perform specific tasks. Mobile robots need sensors to measure properties of their environment, thus allowing for safe navigation, complex perception, appropriate actions, and effective interaction with the other agents that populate it. Sensors used by mobile robots range from simple tactile sensors, such as bumpers, to complex vision-based sensors such as structured-light cameras. All of them provide a digital output (e.g., a string, a set of values, a matrix, etc.) that can be processed by the robot's computer. Such output is typically obtained by discretizing one or more analog electrical signals with an Analog-to-Digital Converter (ADC) included in the sensor. In this chapter we present the most common sensors used in mobile robotics, providing an introduction to their taxonomy, basic features, and specifications. The description of the functionalities and the types of applications follows a bottom-up approach: the basic principles and components on which the sensors are based are presented before describing real-world sensors, which are generally based on multiple technologies and basic devices.
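As an illustrative sketch only (not from the cited preprint): the ADC step described above, discretizing an analog electrical signal into a digital code, can be summarized in a few lines of Python. The 5 V reference and 10-bit depth below are assumed values.

```python
# Illustrative sketch: how an ADC turns an analog voltage into a digital
# code that a robot's computer can process. Reference voltage and bit
# depth are assumptions, not values from the preprint.
def adc_sample(voltage, v_ref=5.0, n_bits=10):
    """Quantize a 0..v_ref analog voltage into an n-bit integer code."""
    levels = 2 ** n_bits                  # number of quantization steps
    code = int(voltage / v_ref * levels)  # map voltage to a step index
    return max(0, min(levels - 1, code))  # clamp to the valid code range

# Example: a 1.7 V reading on a 10-bit ADC referenced to 5 V
print(adc_sample(1.7))  # -> 348
```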
Article
Identifying road obstacles hidden from the driver's field of view can improve road safety in transportation. Current driver-assistance systems such as 2D head-up displays are limited to the projection area on the windshield of the car. An augmented-reality holographic point-cloud video projection system is developed to display objects aligned with real-life objects in size and distance within the driver's field of view. Light Detection and Ranging (LiDAR) point-cloud data collected with a 3D laser scanner are transformed into layered 3D replay-field objects consisting of 400k points. GPU-accelerated computing generated real-time holograms 16.6 times faster than the CPU processing time. The holographic projections are obtained with a Spatial Light Modulator (SLM) (3840×2160 px) and virtual Fresnel lenses, which enlarge the driver's eye box to 25 mm × 36 mm. Road obstacles scanned in real time from different perspectives give the driver a full view of risk factors, with depth rendered in 3D mode and the ability to project any scanned object from any angle over 360°. The 3D holographic projection technology keeps the driver's focus on the road instead of the windshield and enables assistance by projecting road obstacles hidden from the driver's field of view.
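As a hedged illustration (not the authors' GPU pipeline): a virtual Fresnel lens of the kind mentioned above is, in essence, a quadratic phase pattern displayed on a phase-only SLM. The NumPy sketch below computes such a pattern; the wavelength, focal length, and pixel pitch are assumptions, while the 3840×2160 resolution comes from the abstract.

```python
# Illustrative sketch: phase profile of a virtual Fresnel lens, the kind of
# pattern a phase-only SLM can display to focus light at a chosen distance.
import numpy as np

wavelength = 532e-9   # assumed laser wavelength [m]
focal_len = 0.5       # assumed virtual-lens focal length [m]
pitch = 3.74e-6       # assumed SLM pixel pitch [m]
ny, nx = 2160, 3840   # SLM resolution from the abstract

# Physical coordinates of each SLM pixel, centered on the optical axis
y = (np.arange(ny) - ny / 2) * pitch
x = (np.arange(nx) - nx / 2) * pitch
xx, yy = np.meshgrid(x, y)

# Thin-lens quadratic phase, wrapped to [0, 2*pi) for a phase-only modulator
phase = (-np.pi / (wavelength * focal_len)) * (xx**2 + yy**2)
slm_pattern = np.mod(phase, 2 * np.pi)
```

Displaying slm_pattern on the SLM would act like a lens of the chosen focal length; adding such a phase to a computed hologram shifts its replay field to the corresponding depth plane.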