Figure 2
Problem statement of the vision-based tactile sensor mechanism for the estimation of contact location and force distribution using deep learning: data acquisition, training, and inference stages.


Source publication
Article
Full-text available
This work describes the development of a vision-based tactile sensor system that utilizes the image-based information of the tactile sensor in conjunction with input loads under various motions to train a neural network for the estimation of tactile contact position, area, and force distribution. The current study also addresses pragmatic aspects, s...

Contexts in source publication

Context 1
... collected data then has to be paired with the stereo camera samples (which capture the deformation of the elastic body) as right and left images. This collective data has to be properly handled and pre-processed to train the regression network for better prediction of contact position and force distribution, as shown in Figure 2. ...
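To make the pairing step concrete, below is a minimal sketch of how synchronized stereo frames could be matched with load-cell labels before training. The file layout, CSV column names, and label fields are assumptions for illustration, not the authors' actual format.

```python
# Minimal sketch: pairing synchronized stereo frames with load labels for
# regression training. Paths, CSV columns, and label names are assumptions.
import csv
import cv2
import numpy as np

def load_pairs(csv_path):
    """Yield (stacked stereo image, [x, y, area, force]) training pairs."""
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            left = cv2.imread(row["left_path"], cv2.IMREAD_GRAYSCALE)
            right = cv2.imread(row["right_path"], cv2.IMREAD_GRAYSCALE)
            # Stack the two views channel-wise so the network sees both.
            x = np.stack([left, right], axis=-1).astype(np.float32) / 255.0
            y = np.array([float(row[k]) for k in ("pos_x", "pos_y", "area", "force")],
                         dtype=np.float32)
            yield x, y
```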
Context 2
... corresponding results are reported in Table 6. Figure 20 illustrates the estimation errors w.r.t. the circular tool with ground truth (GT = 78.54 mm²), the square tool with (GT = 100.00 ...
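As a worked example of how such estimation errors relate to the quoted ground truths, a relative-error helper might look like the following; the estimated value is hypothetical.

```python
# Relative contact-area error against the ground truths quoted above
# (circular tool GT = 78.54 mm², square tool GT = 100.00 mm²).
def relative_area_error(estimated_mm2, gt_mm2):
    return abs(estimated_mm2 - gt_mm2) / gt_mm2

print(relative_area_error(81.0, 78.54))  # ~0.031, i.e. about 3.1% error
```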

Similar publications

Article
Full-text available
Accurately measuring the motion characteristics of the actuator is important for the study of circuit breaker opening capacity. To measure these characteristics accurately, an image processing method based on the LabVIEW platform is proposed in this paper to build a system for detecting the motion speed o...

Citations

... However, it is difficult to obtain the pose of the occluded object as well as the contact information during the manipulation. As shown in Fig. 1, with the advancement of optical imaging techniques, researchers have combined visual perception with tactile perception, using cameras to detect the deformation of the sensor surface [28]. Based on this mechanism, various ingenious visuotactile sensors have been designed, such as fingertip tactile sensors like GelSight [5] and Digit [8], robotic arms [29], [30], and robot feet [31]. ...
... Before performing force calibration, researchers need to build a platform containing force sensors, probes, and precision slides. As shown in Fig. 9, calibration systems can be divided into manual calibration [28] and automatic calibration [1]. When the amount of collected data is small, a manual calibration system can meet the requirements, but when the amount of collected data is large, an automatic calibration system becomes necessary. ...
... Calibration system. (a) Manual calibration system [28]. (b) Automatic calibration system [1]. ...
Preprint
Tactile sensors, which provide information about the physical properties of objects, are an essential component of robotic systems. Visuotactile sensing technology, with the merits of high resolution and low cost, has facilitated the development of robotics from environment exploration to dexterous operation. Over the years, several reviews on visuotactile sensors for robots have been presented, but few of them discuss the significance of signal processing methods to visuotactile sensors. Apart from ingenious hardware design, the full potential of the sensory system toward designated tasks can only be released with appropriate signal processing methods. Therefore, this paper provides a comprehensive review of visuotactile sensors from the perspective of signal processing methods and outlines possible future research directions for visuotactile sensors.
... Therefore, optimization of VBTS for achieving high-accuracy systems is essential to ensure wider successful implementation. A VBTS comprises crucial components including a contact module in the form of an elastomer skin, sensing elements (markers), a support structure, a vision system, and an illumination system [7,32–34]. Several VBTS designs are documented in the literature, and most of them employ flat or hemispherical elastomer skins, although exceptions exist, such as cylindrical tactile sensors tailored for endoscopy applications. ...
Article
Full-text available
Vision-Based Tactile Sensors (VBTS) play a key role in enhancing the accuracy and efficiency of machining operations in robotic-assisted precision machining systems. Equipped with VBTS, these systems offer contact-based measurements, which are essential in machining accurate components for industries such as aerospace, automotive, medical devices, and electronics. This paper presents a novel approach to virtual prototyping of VBTS, specifically in perpendicularity measurements, using Computer-Aided Design (CAD) generation of VBTS designs, Finite Element Analysis (FEA) simulations, and Sim2Real deep learning to achieve VBTS with high-precision measurements. The virtual prototyping approach enables an understanding of the contact between VBTS of different designs and machined surfaces in terms of contact module shape, thickness, and marker density. Additive manufacturing was employed to fabricate the molds of the VBTS contact module, followed by experimental validation on a robotic arm to confirm the effectiveness of the optimized VBTS design. The results show that deviation from the hemispherical shape reduces the quality of the data captured by the camera, hence increasing the prediction errors. Additionally, reducing the thickness of the contact module enhances the precision of perpendicularity measurements. Importantly, increasing the marker distribution density significantly enhances accuracy up to 92 markers, above which the rate of improvement becomes less pronounced. A VBTS with a height of 20 mm, a thickness of 2 mm, and 169 markers was found to be within the stringent perpendicularity standards of the aerospace manufacturing industry, with 0.58° root mean square error and 1.64° max absolute error around the roll and pitch axes of rotation. The established virtual prototyping methodology can be transferred to a wide variety of elastomer-based sensors.
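For clarity, the two error metrics quoted above relate to per-sample angle errors as shown in the sketch below; the error values here are hypothetical, not the paper's data.

```python
import numpy as np

# Hypothetical per-sample perpendicularity errors in degrees (predicted minus true).
errors_deg = np.array([0.4, -0.7, 0.5, -0.6])
rmse = np.sqrt(np.mean(errors_deg ** 2))   # cf. the reported 0.58° RMSE
max_abs = np.max(np.abs(errors_deg))       # cf. the reported 1.64° max absolute error
print(f"RMSE = {rmse:.2f} deg, max |error| = {max_abs:.2f} deg")
```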
... Some tactile sensors can measure force distribution across three axes [77]. Tactile sensing is employed in various applications, such as slip detection [78,79], object manipulation [80], branch detection [81], determining contact position [82,83], detecting contact events [84], occlusion detection [85,86], and classification tasks [87]. Accurate force distribution combined with slip detection is vital for effective object manipulation. ...
... Additionally, Kakani et al. [82] developed a vision-based tactile sensor system that combines sensor image data with various input loads and motions. This system, integrated with a deep learning model adapted from the VGG-16 architecture, estimates tactile contact position, area, and force distribution. ...
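The citation only states that the model was adapted from the VGG-16 architecture, so the sketch below is a hedged PyTorch illustration of what such a regressor could look like; the head width, output layout, and input size are assumptions.

```python
# Hedged sketch: VGG-16 backbone with a regression head for tactile outputs.
import torch
import torch.nn as nn
from torchvision.models import vgg16

class TactileRegressor(nn.Module):
    def __init__(self, n_outputs=4):  # e.g. contact x, y, area, force (assumed layout)
        super().__init__()
        backbone = vgg16(weights=None)
        self.features = backbone.features           # convolutional stack
        self.pool = nn.AdaptiveAvgPool2d((7, 7))
        self.head = nn.Sequential(                   # regression head replaces classifier
            nn.Flatten(),
            nn.Linear(512 * 7 * 7, 256),
            nn.ReLU(inplace=True),
            nn.Linear(256, n_outputs),
        )

    def forward(self, x):
        return self.head(self.pool(self.features(x)))

model = TactileRegressor()
out = model(torch.randn(1, 3, 224, 224))  # -> tensor of shape (1, 4)
```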
Preprint
Full-text available
The integration of artificial intelligence with sensor technologies has revolutionized precision agriculture, offering unprecedented opportunities for enhancing crop management and productivity. This review focuses on the latest advancements in vision-based tactile sensors, a technology at the forefront of this transformation. By combining tactile data with vision-based techniques, these sensors provide a more comprehensive understanding of the agricultural environment. We thoroughly investigate the role of deep learning approaches in refining the functionality of these sensors, highlighting their potential to significantly improve the accuracy and efficiency of agricultural operations. The paper also explores the importance of specialized datasets in training deep neural networks for vision-based tactile applications, assessing the current landscape and identifying gaps in the available data. Through a thorough examination of the current state of the art, this review paper aims to shed light on the potential of AI-driven tactile sensing in precision agriculture and outline future research directions to further advance this field.
... Well-known image analysis Python libraries such as OpenCV (Bradski and Kaehler, 2008), scikit-learn (Varoquaux et al., 2015), NumPy (Harris et al., 2020), and ImageJ (Abràmoff et al., 2004) toolkits were employed to perform such analysis. CV and AI algorithms have been used in many fields, ranging from computational chemistry, autonomous vehicles, and robotics to agriculture and the food industry, for image analysis (Kakani et al., 2021, 2020a, 2020b, 2020c; Senthil Kumar, 2017). Corresponding SEM samples were explored from various perspectives to realize surface edge intricacy and 3D morphology for roughness features and toxicity. ...
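As a small illustration of this kind of OpenCV/NumPy edge analysis, the sketch below computes a crude edge-density proxy for surface intricacy; the file name and thresholds are placeholders, not the study's actual pipeline.

```python
# Minimal sketch of edge analysis on an SEM image with OpenCV and NumPy.
import cv2
import numpy as np

img = cv2.imread("sem_sample.png", cv2.IMREAD_GRAYSCALE)  # placeholder file
blurred = cv2.GaussianBlur(img, (5, 5), 0)        # suppress sensor noise
edges = cv2.Canny(blurred, 50, 150)               # surface edge intricacy
edge_density = np.count_nonzero(edges) / edges.size
print(f"Edge density (rough proxy for surface roughness): {edge_density:.4f}")
```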
... In recent times, an improved development involves a fingertip sensor [33,34] that combines visual and tactile capabilities. This sensor features point markers positioned on the surface of the fingertip to enhance the precision of determining contact position and force. ...
Article
Full-text available
It is essential to detect pressure on a robot's fingertip in every direction to ensure efficient and secure grasping of objects with diverse shapes. Nevertheless, creating a simply designed sensor that offers cost-effective and omnidirectional pressure sensing poses substantial difficulties, because it often requires more intricate mechanical solutions than designing non-omnidirectional fingertip pressure sensors. This paper introduces an innovative pressure sensor for fingertips. It utilizes a uniquely designed dynamic focusing cone to visually detect pressure with omnidirectional sensitivity. This approach enables cost-effective measurement of pressure from all sides of the fingertip. The experimental findings demonstrate the great potential of the newly introduced sensor. Its implementation is straightforward, offering high sensitivity (0.07 mm/N) in all directions and a broad pressure sensing range (up to 40 N) for robot fingertips.
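As a worked example of the reported characteristics: with a sensitivity of 0.07 mm/N, a measured displacement inverts to force as F = d / 0.07 within the stated 0-40 N range. The helper below is hypothetical and assumes a linear response.

```python
# Hypothetical helper: force from displacement, given the quoted 0.07 mm/N
# sensitivity and 40 N sensing range (linear response assumed).
def force_from_displacement(d_mm, sensitivity_mm_per_N=0.07, f_max_N=40.0):
    force = d_mm / sensitivity_mm_per_N
    return min(force, f_max_N)  # readings beyond the sensing range saturate

print(force_from_displacement(0.7))  # 0.7 mm -> 10.0 N
```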
... Recent advancements in deep learning techniques based on neural networks such as CNN and GANs have paved the way for generating synthetic datasets in major fields such as autonomous vehicles [1,21,38], biometrics [29], video surveillance [12,19], machine vision [20,27,30]. Additionally, the need for image-based forensics [31,50] and biometric applications is increasing with the advent of technology. ...
Article
Full-text available
This study proposes a method for generating an ID-preserving synthetic iris database. The proposed method can be applied to the generation of a synthetic iris database for various iris recognition tasks. This work successfully combines the main ideas of generative adversarial learning, segmentation, and identification to solve real-world problems. The method produces synthetic iris images from segmentation masks given ID information. The segmentation mask (iris pose) is derived from the input image by a segmentation network. By doing this, the ID-preserving iris synthesis method generates an unlimited number of synthetic iris images from the provided input images. The accuracy of the generated iris images is validated by measuring top-1 and top-5 accuracy and the Area Under the Curve (AUC). The SegNet and IDNet performance was evaluated using class accuracy in terms of precision, recall, and F1-score, alongside the computational model complexity. This study exhibits ease of use, compatibility, and accuracy in preserving ID information for the generated synthetic images compared to other baseline methods. Evaluation results demonstrate the efficacy of this work by comparing iris images randomly generated by the current study with those of existing methods.
... Among them, the physical characteristics of polydimethylsiloxane (PDMS) are close to those of human skin. It is often used to make bionic skin and is suitable for making a tactile sensor matrix [4–8]. However, existing piezoresistive, piezoelectric, and capacitive sensors have poor compatibility with flexible materials [9], which has a certain impact on the measurement results. ...
Article
Full-text available
Aiming at the problems of lateral force interference and non-uniform strain of robot fingers in pressure tactile sensing, a flexible tactile sensor with a square-hole structure based on fiber Bragg grating (FBG) is proposed in this paper. Firstly, the optimal embedding depth of the FBG in the sensor matrix model was determined by finite element simulation. Secondly, according to the size of the finger knuckle and the simulation analysis of the pressure tactile sensing element for the robot finger, the square-hole structure was designed, and the overall dimensions of the sensing element and the size of the square hole were determined. Thirdly, the FBG was embedded in a polydimethylsiloxane (PDMS) elastic matrix to make a sensor model, and the tactile sensor was fabricated. Finally, an FBG pressure tactile sensing system platform was built using optical fiber sensing technology, and experiments on the FBG tactile sensor were completed on this platform. Experimental results show that the tactile sensor designed in this paper has good repeatability and creep resistance. The sensitivity is 8.85 pm/N, and the resolution is 0.2 N. The loading sensitivity on the robot finger is 27.3 pm/N, the goodness of fit is 0.996, and the average interference during sensing is 7.63%, which is lower than that of a solid-structure sensor. These results verify that the sensor can effectively reduce lateral force interference, solve the problem of non-uniform strain, and conform well to fingers, which is of practical value for research on intelligent robotic pressure tactile perception.
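As a worked example of the quoted characteristics, the Bragg wavelength shift converts to force as F = Δλ / S with S = 8.85 pm/N; the helper below is a hypothetical illustration that also quantizes to the stated 0.2 N resolution.

```python
# Hypothetical helper: force from Bragg wavelength shift, using the quoted
# 8.85 pm/N sensitivity and 0.2 N resolution (linear response assumed).
SENSITIVITY_PM_PER_N = 8.85
RESOLUTION_N = 0.2

def force_from_shift(delta_lambda_pm):
    f = delta_lambda_pm / SENSITIVITY_PM_PER_N
    # Quantize to the stated 0.2 N resolution of the sensing system.
    return round(f / RESOLUTION_N) * RESOLUTION_N

print(force_from_shift(44.25))  # 44.25 pm -> 5.0 N
```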
... For example, the objective of the work presented in [9] and [10] was to estimate the contact region by subtracting contact and non-contact tactile images. In [11] and [12], the authors used vision-based tactile sensors with markers to estimate the contact region. In the first case, it was obtained by detecting and grouping the moving markers, while in the second it was estimated through the use of a Gaussian regression model. ...
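A minimal sketch of the subtraction approach the excerpt attributes to [9] and [10]: the contact region is the thresholded difference between a contact frame and a non-contact reference frame. The file names and threshold value are assumptions.

```python
# Contact region as the thresholded difference between a contact frame and
# a non-contact reference frame (placeholder files, assumed threshold).
import cv2

ref = cv2.imread("no_contact.png", cv2.IMREAD_GRAYSCALE)
cur = cv2.imread("contact.png", cv2.IMREAD_GRAYSCALE)
diff = cv2.absdiff(cur, ref)
_, contact_mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
```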
... In contrast, in this work, we use tactile segmentation to calculate the rotation angle of an object when slippage occurs during manipulation tasks. Although our work is inspired by the aforementioned works, the main differences lie in the fact that we use DIGIT sensors without markers [11], [12], which do not produce depth information [14], [15], and state-of-the-art segmentation neural networks, which are more robust than subtraction operations [9], [10] and vanilla CNNs [16], and whose training is more stable than the training of GANs [17]. Moreover, in this paper, our methods are trained to segment several contact geometries of real household objects, while in [16] the authors trained their CNNs using basic 3D-printed geometries, and in [17] they trained an RL agent to follow contours and surfaces by segmenting edges. ...
Article
Full-text available
When carrying out robotic manipulation tasks, objects occasionally fall as a result of the rotation caused by slippage. This can be prevented by obtaining tactile information that provides better knowledge of the physical properties of the grasp. In this letter, we estimate the rotation angle of a grasped object when slippage occurs. We implement a system made up of a neural network with which to segment the contact region and an algorithm with which to estimate the rotated angle of that region. This method is applied to DIGIT tactile sensors. Our system has additionally been trained and tested with our publicly available dataset which is, to the best of our knowledge, the first dataset related to tactile segmentation from non-synthetic images to appear in the literature, and with which we have attained results of 95% and 90% for the Dice and IoU metrics in the worst scenario. Moreover, we have obtained a maximum error of ≈3° when testing with objects not previously seen by our system in 45 different lifts. This, therefore, proves that our approach is able to detect the slippage movement, thus providing a possible reaction that will prevent the object from falling.
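One plausible way to estimate the rotated angle of a segmented contact region is via the minimum-area rectangle of its largest contour, sketched below; the authors' actual algorithm may differ, and the mask file is a placeholder.

```python
# Hedged sketch: rotation angle of a segmented contact region from the
# minimum-area rectangle of its largest contour (OpenCV 4 signature).
import cv2

mask = cv2.imread("contact_mask.png", cv2.IMREAD_GRAYSCALE)  # placeholder file
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
largest = max(contours, key=cv2.contourArea)
(_, _), (_, _), angle = cv2.minAreaRect(largest)  # angle of the fitted box, degrees
print(f"Estimated rotation: {angle:.1f} degrees")
```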
... Innovations in artificial neural networks [1,2] and vision-based technologies [3–5] have spawned intelligent applications in a variety of fields [6–11], despite some limitations in low-cost computing [12]. Due to developments in spiking neural networks (SNNs), neuromorphic processing units inspired by the brain have gained popularity [13]. ...
Article
Full-text available
This paper investigates the performance of deep convolutional spiking neural networks (DCSNNs) trained using spike-based backpropagation techniques. Specifically, the study examined temporal spike sequence learning via backpropagation (TSSL-BP) and surrogate gradient descent via backpropagation (SGD-BP) as effective techniques for training DCSNNs on the field programmable gate array (FPGA) platform for object classification tasks. The primary objective of this experimental study was twofold: (i) to determine the most effective backpropagation technique, TSSL-BP or SGD-BP, for deeper spiking neural networks (SNNs) with convolution filters across various datasets; and (ii) to assess the feasibility of deploying DCSNNs trained using backpropagation techniques on low-power FPGA for inference, considering potential configuration adjustments and power requirements. The aforementioned objectives will assist in informing researchers and companies in this field regarding the limitations and unique perspectives of deploying DCSNNs on low-power FPGA devices. The study contributions have three main aspects: (i) the design of a low-power FPGA board featuring a deployable DCSNN chip suitable for object classification tasks; (ii) the inference of TSSL-BP and SGD-BP models with novel network architectures on the FPGA board for object classification tasks; and (iii) a comparative evaluation of the selected spike-based backpropagation techniques and the object classification performance of DCSNNs across multiple metrics using both public (MNIST, CIFAR10, KITTI) and private (INHA_ADAS, INHA_KLP) datasets.
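To illustrate the surrogate-gradient idea behind spike-based backpropagation such as SGD-BP, the sketch below keeps a hard threshold spike in the forward pass and substitutes a smooth fast-sigmoid derivative in the backward pass; this is a generic textbook formulation, not the paper's exact implementation.

```python
# Generic surrogate-gradient spike function: Heaviside forward,
# fast-sigmoid derivative backward (not the paper's exact method).
import torch

class SpikeFn(torch.autograd.Function):
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()                          # hard threshold spike

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        surrogate = 1.0 / (1.0 + 10.0 * v.abs()) ** 2   # fast-sigmoid derivative
        return grad_out * surrogate

spike = SpikeFn.apply
v = torch.randn(8, requires_grad=True)                  # membrane potentials
spike(v).sum().backward()                               # gradients flow via surrogate
```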
... This data format is equivalent to a monochrome image of 6 × 22 pixels, with the pressure intensity corresponding to the value of a pixel. These similarities between tactile and image data have already been exploited in previous works [35,36]. Thus, the framework uses the OpenCV [37] library (version 3.1) and embeds common computer vision algorithms. ...
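A minimal sketch of that tactile-to-image equivalence: a 6 × 22 pressure array normalized into a monochrome image and processed with ordinary OpenCV operations. The readings and threshold here are placeholders.

```python
# A 6 x 22 pressure array treated as a monochrome image (placeholder data).
import cv2
import numpy as np

pressure = np.random.rand(6, 22).astype(np.float32)          # placeholder readings
img = cv2.normalize(pressure, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
upscaled = cv2.resize(img, (220, 60), interpolation=cv2.INTER_NEAREST)
_, contact = cv2.threshold(upscaled, 128, 255, cv2.THRESH_BINARY)
```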
Article
Full-text available
Recent developments in robotics have enabled humanoid robots to be used in tasks where they have to physically interact with humans, including robot-supported caregiving. This interaction, referred to as physical human-robot interaction (pHRI), requires physical contact between the robot and the human body; one way to improve it is to use efficient sensing methods for the physical contact. In this paper, we use a flexible tactile sensing array and integrate it as a tactile skin for the humanoid robot HRP-4C. As the sensor can take any shape due to its flexibility, a particular focus is placed on its spatial calibration, i.e., the determination of the locations of the sensor cells and their normals when attached to the robot. For this purpose, a novel method of spatial calibration using B-spline surfaces has been developed. We demonstrate with two methods that this calibration gives a good approximation of the sensor position and show that our flexible tactile sensor can be fully integrated on a robot and used as input for robot control tasks. These contributions are a first step toward the use of flexible tactile sensors in pHRI applications.
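As a rough illustration of B-spline-based spatial calibration, the sketch below fits a smooth bivariate spline to hypothetical sensor-cell positions and derives a surface normal from its partial derivatives; it uses SciPy as a stand-in, not the authors' implementation.

```python
# Fit a B-spline surface to measured cell heights z over (x, y), then
# evaluate a surface normal from the partial derivatives (SciPy stand-in).
import numpy as np
from scipy.interpolate import SmoothBivariateSpline

# Placeholder measured cell coordinates.
x = np.random.rand(50)
y = np.random.rand(50)
z = 0.1 * np.sin(3 * x) + 0.1 * np.cos(3 * y)
spline = SmoothBivariateSpline(x, y, z)

# Normal at (u, v) from tangents (1, 0, dz/dx) and (0, 1, dz/dy): (-fx, -fy, 1).
u, v = 0.5, 0.5
dzdx = spline.ev(u, v, dx=1)
dzdy = spline.ev(u, v, dy=1)
normal = np.array([-dzdx, -dzdy, 1.0])
normal /= np.linalg.norm(normal)
```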