Optical Tactile Sensors: (a) an optical tactile transducer based on the principle of frustrated total internal reflection [45], (b) the structure of an optical three-axis tactile sensor: a displacement of a sensing element fixed on the flexible finger surface causes changes in light propagation in the optical fibers [37], (c) fingers with the sensitive optical sensors manipulating a light paper box [37], (d) photo of an optical 3×3 tactile array with magnetic field compatibility [77], (e) the "GelSight" optical sensor, a piece of clear elastomer coated with a reflective membrane, sensing the shape of a cookie surface [79], (f) finger configurations of the "GelSight" sensor [79].

Source publication
Article
Full-text available
Tactile sensing is an essential element of autonomous dexterous robot hand manipulation. It provides information about forces of interaction and surface properties at points of contact between the robot fingers and the objects. Recent advancements in robot tactile sensing led to development of many computational techniques that exploit this importa...

Contexts in source publication

Context 1
... sensing is based on optical reflection between media with different refractive indices. Conventional optical tactile sensors consist of an array of infrared light-emitting diodes (LEDs) and photodetectors (Figure 5(a)); the intensity of the light is proportional to the magnitude of the pressure [45]. Optical sensors can also be made sensitive to shear forces, e.g. Yussof et al. [37] developed an optical three-axis tactile sensor for the fingertips of a two-fingered hand (Figure 5(b)). The sensor consists of 41 sensing elements made from silicone rubber, a light source, an optical fiber-scope, and a charge-coupled device (CCD) camera. With the optical tactile sensor, the hand is capable of manipulating a light paper box (Figure 5(c)). Kampmann et al. [76] embedded fiber-optic sensors into the multi-modal tactile measuring system of a three-fingered robot gripper (Figure 7(d)). Xie et al. developed a flat 3×3 optical tactile sensor array (Figure 5(d)) whose elements are magnetic resonance compatible for use in Magnetic Resonance Imaging [77]. Johnson et al. [78] proposed the novel "GelSight" tactile sensor, which captures surface textures using an elastomer coated with a reflective membrane and a camera, with a resolution of up to 2 microns (Figure 5(e)). 
A fingertip with a "GelSight" tactile sensor (Figure 5(f)) can measure surface roughness and texture, the pressure distribution, and even slip [79]. Another example of an optical tactile sensor with a transparent elastomer is presented in [80], where an LED and a photodiode, placed at a distance from each other, face a reflecting (contact) planar surface. When the surface deforms, it causes changes in the reflected beams. A similar concept is used in the OptoForce sensors [81]. These sensors use infrared light to detect deformation of the contact surface, which is in turn transformed into force. The forces in three dimensions are estimated from the measurements of four photodiodes that surround one infrared source. The reflecting surface has a semi-spherical ...
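The four-photodiode OptoForce layout described above can be sketched in a few lines: shear components come from the differential signals of opposing diodes, and the normal component from the total change in reflected light. This is a minimal illustration of the idea, not the vendor's algorithm; the calibration gains and the linear model are assumptions, since real sensors use per-device calibration.

```python
import numpy as np

# Hypothetical calibration gains mapping intensity change to newtons;
# a real sensor would obtain these from per-device calibration.
K_SHEAR = 0.05   # N per intensity unit (differential channels)
K_NORMAL = 0.02  # N per intensity unit (sum channel)

def estimate_force(i_north, i_south, i_east, i_west, i_rest):
    """Estimate a 3-axis force from four photodiode intensities arranged
    around one infrared emitter. i_rest is the per-diode intensity at
    zero load (the undeformed reflecting dome)."""
    d = np.array([i_north, i_south, i_east, i_west], float) - i_rest
    fx = K_SHEAR * (d[2] - d[3])   # east minus west  -> shear along x
    fy = K_SHEAR * (d[0] - d[1])   # north minus south -> shear along y
    fz = K_NORMAL * d.sum()        # total reflected-light change -> normal force
    return fx, fy, fz
```

A pure shear load tilts the dome, brightening one diode and dimming its opposite, so only the differential terms respond; a pure normal load flattens the dome symmetrically, so only the sum term responds.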

Similar publications

Article
Full-text available
Robots are increasingly expected to replace humans in many repetitive and high‐precision tasks, of which surface scanning is a typical example. However, it is usually difficult for a robot to independently deal with a surface scanning task with uncertainties in, for example the irregular surface shapes and surface properties. Moreover, it usually r...

Citations

... Similarly, grid capacitive, piezoelectric, and resistive tactile sensors are also moderately priced, reflecting the complexity of their construction and sensing mechanisms. Neuron memristors and ionic tactile sensors are relatively newer technologies, occupying the moderate cost range as they undergo further development and refinement [5,29,38,39]. However, all of them face limitations including high cost, complexity, sensitivity to environmental factors, integration challenges, and training issues with ML models due to the need for extensive datasets and iterative refinement filtration, etc. ...
... While texture classification, edge detection, and object classification have received considerable attention, hardness classification has been relatively less explored, despite its significant importance. In the field of material science and engineering, machine learning (ML) techniques have proven invaluable in determining material properties using various methods and approaches, such as the Shore scale taxonomy [7,10,12,36,38,39]. ML enables systems to learn and recognize materials based on their hardness using techniques like grasping, indentation, and resistance methods. ...
... These sensors may vary in their single- or multi-dimensional array configurations. Many studies have focused on emulating spike patterns using artificial mechanoreceptors, with some applied in hardness classification tasks [39, 43-45]. Currently, many of these advanced sensors are not available in thin-film form for multiple tasks within robotics applications. ...
Article
Full-text available
Perception is essential for robotic systems, enabling effective interaction with their surroundings through actions such as grasping and touching. Traditionally, this has relied on integrating various sensor systems, including tactile sensors, cameras, and acoustic sensors. This study leverages commercially available tactile sensors for hardness classification, drawing inspiration from the functionality of human mechanoreceptors in recognizing complex object properties during grasping tasks. Unlike previous research using customized sensors, this study focuses on cost-effective, easy-to-install, and readily deployable sensors. The approach employs a qualitative method, using the Shore hardness taxonomy to select objects and evaluate the performance of commercial off-the-shelf (COTS) sensors. The analysis includes data from both individual sensors and their combinations, analysed using multiple machine learning approaches, with accuracy as the primary evaluation metric. The findings illustrate that increasing the number of classification classes impacts accuracy, achieving 92% in binary classification, 82% in ternary, and 80% in quaternary scenarios. Notably, the performance of commercially available tactile sensors, at 92% accuracy with a limited data set, is comparable to the 50% to 98% accuracy reported in the literature. These results highlight the capability of COTS tactile sensors in hardness classification while being cost-effective and easier to deploy than customized sensors.
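The classification setup described in this abstract (tactile features in, hardness class out, accuracy as the metric) can be sketched with a deliberately minimal model. The features and the nearest-centroid classifier below are illustrative stand-ins, not the paper's sensors or ML pipeline: the synthetic data merely encodes the intuition that, under the same grasp force, harder objects show higher contact pressure and a smaller contact area.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for COTS tactile features (hypothetical):
# hardness 0 = soft, 1 = hard.
def make_samples(n, hardness):
    pressure = 0.3 + 0.5 * hardness + 0.1 * rng.standard_normal(n)
    area = 0.8 - 0.4 * hardness + 0.1 * rng.standard_normal(n)
    return np.column_stack([pressure, area])

X_train = np.vstack([make_samples(100, 0), make_samples(100, 1)])
y_train = np.array([0] * 100 + [1] * 100)
X_test = np.vstack([make_samples(50, 0), make_samples(50, 1)])
y_test = np.array([0] * 50 + [1] * 50)

# Nearest-centroid classifier: each test sample gets the label of the
# closest class mean in feature space.
centroids = np.stack([X_train[y_train == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(
    np.linalg.norm(X_test[:, None, :] - centroids[None], axis=2), axis=1)
acc = (pred == y_test).mean()
print(f"binary hardness accuracy: {acc:.2f}")
```

Moving from binary to ternary or quaternary hardness classes shrinks the margin between adjacent centroids, which is consistent with the accuracy drop the abstract reports.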
... Additionally, the hand is equipped with a network of tendons and muscles that provide strength and allow for precise and coordinated movements [8]. Replicating this complexity in a robotic system requires careful mechanical design and the integration of advanced actuators and sensors [9]. ...
Article
Full-text available
This article presents a study on the neurobiological control of voluntary movements for anthropomorphic robotic systems. A corticospinal neural network model has been developed to control joint trajectories in multi-fingered robotic hands. The proposed neural network simulates cortical and spinal areas, as well as the connectivity between them, during the execution of voluntary movements similar to those performed by humans or monkeys. Furthermore, this neural connection allows for the interpretation of functional roles in the motor areas of the brain. The proposed neural control system is tested on the fingers of a robotic hand, which is driven by agonist–antagonist tendons and actuators designed to accurately emulate complex muscular functionality. The experimental results show that the corticospinal controller produces key properties of biological movement control, such as bell-shaped asymmetric velocity profiles and the ability to compensate for disturbances. Movements are dynamically compensated for through sensory feedback. Based on the experimental results, it is concluded that the proposed biologically inspired adaptive neural control system is robust, reliable, and adaptable to robotic platforms with diverse biomechanics and degrees of freedom. The corticospinal network successfully integrates biological concepts with engineering control theory for the generation of functional movement. This research significantly contributes to improving our understanding of neuromotor control in both animals and humans, thus paving the way towards a new frontier in the field of neurobiological control of anthropomorphic robotic systems.
... However, manual programming of the robot can be a bottleneck in reaching the goal of "minimal human intervention", as it is highly time-consuming [4]. In today's industrial environment, the trend is towards manufacturing components in lower volumes with higher flexibility and adaptability depending on the wishes of the customers [5]. This has led to development of adaptive and reconfigurable robot cells [6], which are suitable to work on several different work- Fig. 1 A conceptual quality inspection setup, in which a robot is scanning the parts which come out of the production on a conveyor belt. ...
Article
Full-text available
Currently, the standard method of programming industrial robots is to perform it manually, which is cumbersome and time-consuming. Thus, it can be a burden for the flexibility of inspection systems when a new component with a different design needs to be inspected. Therefore, developing a way to automate the task of generating a robotic trajectory offers a substantial improvement in the field of automated manufacturing and quality inspection. This paper proposes and evaluates a methodology for automatizing the process of scanning a 3D surface for the purpose of quality inspection using only visual feedback. The paper is divided into three sub-tasks in the same general setting: (1) autonomously finding the optimal distance of the camera on the robot's end-effector from the surface, (2) autonomously generating a trajectory to scan an unknown surface, and (3) autonomous localization and scan of a surface with a known shape, but with an unknown position. The novelty of this work lies in the application that only uses visual feedback, through the image focus measure, for determination and optimization of the motion. This reduces the complexity and the cost of such a setup. The methods developed have been tested in simulation and in real-world experiments, and it was possible to obtain a precision in the optimal pose of the robot under 1 mm in translational and 0.1° in angular directions. It took less than 50 iterations to generate a trajectory for scanning an unknown free-form surface. Finally, with less than 30 iterations during the experiments it was possible to localize the position of the surface. 
Overall, the results of the proposed methodologies show that they can bring substantial improvement to the task of automatic motion generation for visual quality inspection.
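The image focus measure that drives this pipeline is not specified in the snippet above; the variance-of-Laplacian score below is one common choice, shown here only as an illustration of how a scalar sharpness signal can be derived from a frame and then maximized over camera distance.

```python
import numpy as np

def focus_measure(image):
    """Variance-of-Laplacian sharpness score: higher means better focus.
    One common focus measure; the paper's exact measure may differ."""
    img = np.asarray(image, dtype=float)
    # 4-neighbour discrete Laplacian over interior pixels
    lap = (img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:]) - 4.0 * img[1:-1, 1:-1]
    return lap.var()

# A sharp checkerboard pattern scores higher than its fully blurred
# (uniform) counterpart, so a hill-climb over camera distance that
# maximizes this score converges toward the in-focus pose.
sharp = np.indices((32, 32)).sum(axis=0) % 2 * 255.0
blurred = np.full((32, 32), sharp.mean())
assert focus_measure(sharp) > focus_measure(blurred)
```

In the scanning setting, the robot would evaluate this score at each candidate end-effector distance and step toward the maximum, which matches the iteration counts the abstract reports.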
... Camera sensors, which are used very commonly to collect information about objects to be grasp, do not work well in extreme lighting conditions or on reflective objects. On the other hand, special types of tactile sensors used to obtain the fingertip contact point and fingertip force still have problems with robustness, noise, accuracy, and drift, and are difficult to maintain [16], so they are not ready to be used in some uncertain situations (e.g., grasping unknown objects, in unstructured environments, grasping objects that can damage the tactile sensors). Rather, the majority of manipulators/robots in the industry still have robust and widely used sensors attached inside the hand, far from the tip, and necessary information is obtained by transforming the sensor signals. ...
... If not, the counting process continues and ends when there is nothing left to count inside the ANN. After checking all data points in the ANN, if the counted value is still not equal to MinPts, then the counted value is less than MinPts and the fingertip is detected to be in contact with the object (lines 16-18). All cases were checked, so the algorithm ends here (line 19). ...
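The MinPts-based check described in this snippet follows the density-test idea familiar from DBSCAN: a new sensor sample with fewer than MinPts neighbours inside the stored no-contact data is a density outlier, which is taken as contact. The sketch below illustrates that idea only; the cited paper's exact data structure and thresholds are not reproduced here.

```python
import numpy as np

def is_contact(sample, reference_points, eps=0.5, min_pts=5):
    """DBSCAN-style density test (an illustration of the idea, not the
    paper's exact algorithm): if fewer than min_pts reference points lie
    within eps of the new sample, the sample falls outside the
    free-motion cluster and contact is declared."""
    dists = np.linalg.norm(reference_points - sample, axis=1)
    return np.count_nonzero(dists <= eps) < min_pts

# Reference cluster: internal-sensor readings recorded during free
# (no-contact) motion of the finger.
rng = np.random.default_rng(1)
free_motion = rng.normal(0.0, 0.2, size=(100, 2))

assert not is_contact(np.zeros(2), free_motion)        # inside the cluster
assert is_contact(np.array([3.0, 3.0]), free_motion)   # outlier -> contact
```

Early termination, as in the snippet, would simply stop counting once min_pts neighbours have been found rather than scanning all reference points.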
Article
Full-text available
Detecting contact when fingers are approaching an object and estimating the magnitude of the force the fingers are exerting on the object after contact are important tasks for a multi-fingered robotic hand to stably grasp objects. However, for a linkage-based under-actuated robotic hand with a self-locking mechanism to realize stable grasping without using external sensors, such tasks are difficult to perform when only analyzing the robot model or only applying data-driven methods. Therefore, in this paper, a hybrid of previous approaches is used to find a solution for realizing stable grasping with an under-actuated hand. First, data from the internal sensors of a robotic hand are collected during its operation. Subsequently, using the robot model to analyze the collected data, the differences between the model and real data are explained. From the analysis, novel data-driven-based algorithms, which can overcome noted challenges to detect contact between a fingertip and the object and estimate the fingertip forces in real-time, are introduced. The proposed methods are finally used in a stable grasp controller to control a triple-fingered under-actuated robotic hand to perform stable grasping. The results of the experiments are analyzed to show that the proposed algorithms work well for this task and can be further developed to be used for other future dexterous manipulation tasks.
... Currently, research pertaining to the analysis of stable contact states in anthropomorphic robotic hands can be broadly classified into two primary categories [4-7]. The first category involves direct acquisition of contact state information by incorporating tactile or other sensory mechanisms onto the surface of robotic hands [8,9]. These sensors have the capability to directly measure important parameters, such as contact force [10], contact area [11] and contact duration [12], between the robotic hands and the target objects, providing intricate data regarding the interaction between the fingers and the objects. ...
Article
Full-text available
Conducting contact state analysis enhances the stability of object grasping by an anthropomorphic robotic hand. The incorporation of soft materials grants the anthropomorphic robotic hand a compliant nature during interactions with objects, which, in turn, poses challenges for accurate contact state analysis. According to the characteristic of the anthropomorphic robotic hand’s compliant contact, a kinetostatic modeling method based on the pseudo-rigid-body model is proposed. It can realize the mapping between contact force and driving torque. On this basis, the stable contact states of the anthropomorphic robotic hand under the envelope grasping mode are further analyzed, which are used to reasonably plan the contact position of the anthropomorphic robotic hand before grasping an object. Experimental results validate the efficacy of the proposed approach during grasping and ensure stable contact in the initial grasping stage. It significantly contributes to enhancing the reliability of the anthropomorphic robotic hand’s ability to securely grasp objects.
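The core of the kinetostatic mapping this abstract describes, between fingertip contact force and joint driving torque, reduces in the rigid limit to the Jacobian-transpose relation tau = J^T F; the paper's pseudo-rigid-body model refines this with torsional springs at pseudo-joints. A minimal planar two-link sketch, with illustrative link lengths and angles:

```python
import numpy as np

def fingertip_jacobian(thetas, lengths):
    """Jacobian of a planar 2-link finger mapping joint velocities to
    fingertip velocity (a generic illustration, not the paper's model)."""
    t1, t12 = thetas[0], thetas[0] + thetas[1]
    l1, l2 = lengths
    return np.array([
        [-l1 * np.sin(t1) - l2 * np.sin(t12), -l2 * np.sin(t12)],
        [ l1 * np.cos(t1) + l2 * np.cos(t12),  l2 * np.cos(t12)],
    ])

# Static equilibrium: joint torques that balance a fingertip contact force.
J = fingertip_jacobian(np.deg2rad([30.0, 45.0]), (0.04, 0.03))  # metres
f_contact = np.array([0.0, 2.0])  # 2 N normal force at the fingertip
tau = J.T @ f_contact             # driving torque (N*m) at each joint
```

Inverting this mapping lets a controller infer the contact force from measured or commanded joint torques, which is what makes contact-state analysis possible without surface tactile sensors.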
... The 2D patterning of ultrathin metallic electrodes and interconnects bonded on a compliant polymer substrate enables their flexibility and their use in current flexible electronics such as flexible displays, wearable human health monitoring devices, and soft robot sensors [1][2][3] . These metal films have thicknesses ranging from hundreds of nanometers to up to a few microns and are pre-designed with 3D wrinkles or 2D serpentine-shaped structures, which can effectively lower the flexural rigidity (~thickness 3 ) and minimize strains in the metal layer. This geometric feature accommodates large deformation without excessive stress concentration [4][5][6] . ...
Article
Full-text available
Conductive patterned metal films bonded to compliant elastomeric substrates form meshes which enable flexible electronic interconnects for various applications. However, while bottom-up deposition of thin films by sputtering or growth is well-developed for rigid electronics, maintaining good electrical conductivity in sub-micron thin metal films upon large deformations or cyclic loading remains a significant challenge. Here, we propose a strategy to improve the electromechanical performance of nanometer-thin palladium films by in-situ synthesis of a conformal graphene coating using chemical vapor deposition (CVD). The uniform graphene coverage improves the thin film’s damage tolerance, electro-mechanical fatigue, and fracture toughness owing to the high stiffness of graphene and the conformal CVD-grown graphene-metal interface. Graphene-coated Pd thin film interconnects exhibit stable increase in electrical resistance even when strained beyond 60% and longer fatigue life up to a strain range of 20%. The effect of graphene is more significant for thinner films of < 300 nm, particularly at high strains. The experimental observations are well described by the thin film electro-fragmentation model and the Coffin-Manson relationship. These findings demonstrate the potential of CVD-grown graphene nanocomposite materials in improving the damage tolerance and electromechanical robustness of flexible electronics. The proposed approach offers opportunities for the development of reliable and high-performance ultra-conformable flexible electronic devices.
... As the field of robotics progresses towards increasingly intricate anthropomorphic tasks, the development of high-speed and high-resolution soft tactile systems becomes increasingly important [1]. Existing research has focused on a number of bioinspired areas such as soft sensorized hands, skins, and arms [2], [3], [4], [5]. Correspondingly, different methods of sensorization have been introduced, including capacitive, tomographic, acoustic, and pneumatic [3], [6], [7], [8]. ...
... The first string for each speed is the raw predicted output text. The repetitions and noise seen in these strings match the model we assign in Equation (2). The second string is the error- and length-corrected string that we use for measuring the accuracy of the system. ...
Article
Full-text available
Most braille-reading robotic sensors employ a discrete letter-by-letter reading strategy, despite the higher potential speeds of a biomimetic sliding approach. We propose a complete pipeline for continuous braille reading: frames are dynamically collected with a vision-based tactile sensor; an autoencoder removes motion-blurring artefacts; a lightweight YOLO v8 model classifies the braille characters; and a data-driven consolidation stage minimizes errors in the predicted string. We demonstrate a state-of-the-art speed of 315 words per minute at 87.5% accuracy, more than twice the speed of human braille reading. Whilst demonstrated on braille, this biomimetic sliding approach can be further employed for richer dynamic spatial and temporal detection of surface textures, and we consider the challenges which must be addressed in its development.
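The consolidation stage described above must remove the repeated detections produced as the sensor slides continuously over each braille cell. One plausible, simplified version of such a step (not the authors' exact data-driven algorithm) collapses consecutive duplicate predictions, CTC-style:

```python
from itertools import groupby

# The sliding sensor emits a stream of per-frame character predictions
# containing repeats (the same cell seen across several frames). A simple
# consolidation, in the spirit of the paper's repetition model but not its
# exact algorithm, keeps one character per run of identical predictions.

def collapse_repeats(frame_predictions):
    """Collapse runs of identical consecutive predictions to one character."""
    return "".join(ch for ch, _ in groupby(frame_predictions))

print(collapse_repeats("hhheelllllooo"))  # -> "helo"
```

Note the classic limitation visible in the output: a real decoder also needs a blank or boundary symbol so that genuine double letters (as in "ll") survive the collapse; this sketch illustrates only the repetition handling.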
... Existing tactile sensors based on strain gauges, piezoelectrics, and capacitor arrays make it possible to measure pressure in the indicated ranges with high spatial resolution and speed, and can be built into the design of most of the listed devices [6]. For example, an e-skin with resistive tactile sensors based on a stretchable 10×10 transistor array over an area of 5×5 mm² has been demonstrated [7]; it tolerates up to 2× stretching of the sensing element with a force resolution of < 1 mN. ...
... Additionally, the haptic force can represent the level of contact force between the robot and the environment, facilitating safe contact and grasping tasks. Various haptic devices, such as exoskeletons, haptic gloves, or gaming controllers with tactile feedback, enable the operator to feel and manipulate objects remotely [16,23]. Bilateral teleoperation [22] is a form of remote robotic control with two-way communication between the operator and the robot, enabling the operator both to send commands to the robot and to receive sensory feedback, such as haptic information, from the remote environment. ...
Article
Full-text available
In real-life teleoperation scenarios, the presence of time-varying network delays, particularly in wireless networks, poses significant challenges in maintaining stability in a bilateral teleoperation system. Various approaches have been proposed in the past to address stability concerns; however, these often come at the expense of system transparency. Nevertheless, increasing transparency is crucial in a teleoperation system to enable precise and safe operations, as well as to provide real-time decision-making capabilities for the operator. This paper presents our comprehensive approach to maximizing teleoperation transparency by minimizing system impedance, enhancing the wave-variable method to handle time-varying network delays, and alleviating non-smooth effects caused by network jitter in bilateral teleoperation. The proposed methodologies take into account real-world challenges and considerations to ensure the practical applicability and effectiveness of the teleoperation system. Throughout these implementations, passivity analysis is employed to ensure system stability, guaranteeing a reliable and safe teleoperation experience. The proposed approaches were successfully validated in Team Northeastern’s Avatar telepresence system, which achieved third place in the ANA Avatar XPRIZE challenge.
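The wave-variable method mentioned in the abstract passivates a delayed channel by transmitting wave quantities instead of raw velocity and force. A sketch of the standard textbook transformation, not the paper's enhanced variant; the wave impedance value b is illustrative:

```python
import math

# Standard wave-variable encoding for a bilateral channel: with wave
# impedance b, velocity v and force F map to
#   u = (b*v + F) / sqrt(2*b)   (forward wave)
#   w = (b*v - F) / sqrt(2*b)   (backward wave)
# The channel transmits u and w; the power they carry, 0.5*(u^2 - w^2),
# equals F*v exactly, which is what makes the delayed channel passive.

def encode(v, F, b=10.0):
    s = math.sqrt(2.0 * b)
    return (b * v + F) / s, (b * v - F) / s

v, F = 0.3, 2.0
u, w = encode(v, F)
assert abs(0.5 * (u**2 - w**2) - F * v) < 1e-12
```

Because the transmitted waves carry exactly the mechanical power F·v, a constant delay can only store or dissipate energy, never generate it, which is the passivity argument the paper builds on.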
... These sensors can provide robots with feedback about the properties of objects they are manipulating, which can help them to grasp objects more securely and manipulate them more effectively. Despite recent advances in this field, such tactile sensors still struggle with complex integration into robot hands, the difficulty of modeling their behavior and incorporating this information into control algorithms, and the heterogeneity of objects and surfaces encountered in real-world environments (Kappassov et al., 2015). ...
Thesis
Full-text available
In recent years, the field of aerial manipulation has witnessed significant advancements in all its subareas. Among the diverse designs, suspended aerial manipulators, like the DLR Suspended Aerial Manipulator (SAM), emerged as promising solutions due to their inherent advantages in terms of energy efficiency, safety, payload capacity, and redundancy. By harnessing the versatility of complex robots while circumventing the challenges of fully autonomous operation, telemanipulation has historically proven to enable the execution of complex tasks. However, the time-delayed teleoperation of redundant aerial robots like the SAM poses important challenges not yet tackled in the literature. These challenges encompass the system's redundancy, the plurality of actuators, the absence of full-pose measurements, and the limited thrust of the propellers. In that scope, this thesis aims at achieving the following goals: (1) adapting whole-body control approaches developed for fixed-base robots to aerial robots; (2) enabling time-delayed bilateral teleoperation of redundant aerial robots, allowing the operator to exploit the system redundancy and command multiple tasks; (3) estimating the full pose of nonlinear systems in the presence of Pfaffian constraints; and (4) addressing limited propeller actuation and developing energy-efficient dynamic maneuver control techniques to reach the full workspace of suspended aerial robots. Initially, we present a control system that compensates for the motion of the platform's center of mass and allows for its camera view to be commanded as a secondary task. Subsequently, a passivity-based framework for whole-body bilateral teleoperation of redundant robots is introduced. To estimate the pose of the system, an observer for multi-body systems subject to Pfaffian constraints is presented. Moreover, a Model Predictive Control (MPC) framework, called EigenMPC, capable of finding sustained nonlinear oscillations is introduced.
The proposed teleoperation framework is the first to allow for multi-task and multi-device bilateral teleoperation in the presence of time delays. Additionally, the EigenMPC framework enables limited-actuation systems to exploit their full workspace. The methods presented here are validated in both indoor and outdoor experiments. The results demonstrate an integrated system using information from a variety of sensors and cameras, receiving commands from a remote operator, and using three distinct actuation types to perform manipulation tasks.