Figure 1 - available via license: CC BY
The tactile sensor presented in this article.

Source publication
Article
Full-text available
Human skin is capable of sensing various types of forces with high resolution and accuracy. The development of an artificial sense of touch needs to address these properties, while retaining scalability to large surfaces with arbitrary shapes. The vision-based tactile sensor proposed in this article exploits the extremely high resolution of modern...

Contexts in source publication

Context 1
... article describes the design of a sensor (shown in Figure 1) that consists of a camera that tracks the movement of spherical markers within a gel, providing an approximation of the strain field inside the material. This information is exploited to reconstruct the normal external force distribution that acts on the surface of the gel. ...
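As a rough illustration of the marker tracking described above, the sketch below matches each resting marker to its nearest detected position and returns per-marker displacement vectors; `marker_displacements` and the toy coordinates are hypothetical stand-ins, not the paper's implementation.

```python
import numpy as np

def marker_displacements(ref_pts, cur_pts):
    """Match each reference marker to its nearest detected marker and
    return per-marker displacement vectors -- a rough stand-in for the
    strain-field approximation described above."""
    ref = np.asarray(ref_pts, dtype=float)   # (N, 2) marker centers at rest
    cur = np.asarray(cur_pts, dtype=float)   # (M, 2) marker centers under load
    # Pairwise distances between reference and current detections.
    d = np.linalg.norm(ref[:, None, :] - cur[None, :, :], axis=-1)
    nearest = d.argmin(axis=1)               # index of the closest detection
    return cur[nearest] - ref                # (N, 2) displacement field

# Toy example: three markers, all shifted by (1, 0) under load.
rest = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
pressed = [(1.0, 0.0), (11.0, 0.0), (1.0, 10.0)]
print(marker_displacements(rest, pressed))   # each row = [1. 0.]
```

Nearest-neighbor matching like this only works while displacements stay small relative to marker spacing, which is the regime a thin gel layer operates in.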
Context 2
... this soft layer was also cured, the second lid was replaced with the third one, shown (in yellow) in Figure 2d, with an indent that left an empty section of varying thickness (between 1 mm and 1.5 mm, depending on the section) around the gel. A black silicone layer (ELASTOSIL ® RT 601 RTV-2, mixing ratio 25:1, shore hardness 10A) was then poured through the cavity on the third lid. Figure 4 shows a schematic cross-sectional view of the three silicone layers, and an example of the resulting tactile sensor is shown in Figure 1. The stiff layer, which was poured first, served as a base for the softer materials that were placed on top of it, and as a spacer between the camera and the region of interest. ...
Context 3
... the resulting image was converted to gray-scale. An example of the masked image and the computed dense optical flow are shown in Figure 10. As in the case of the sparse keypoint tracking, the resulting flow was represented as tuples of magnitude and angle, in this case for each pixel. ...
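The per-pixel magnitude/angle representation described above can be reproduced with a few NumPy operations. This is a minimal sketch under the assumption of a dense flow field of shape (H, W, 2); the original likely computes the flow itself with an optical-flow library, which is not assumed here.

```python
import numpy as np

def flow_to_mag_angle(flow):
    """Convert a dense optical-flow field of shape (H, W, 2) into
    per-pixel (magnitude, angle) tuples, mirroring the feature
    representation described above. Angles are in radians, [0, 2*pi)."""
    fx, fy = flow[..., 0], flow[..., 1]
    magnitude = np.hypot(fx, fy)
    angle = np.mod(np.arctan2(fy, fx), 2 * np.pi)
    return np.stack([magnitude, angle], axis=-1)

# A 1x2 flow field: one pixel moving along (3, 4), one moving straight up.
flow = np.array([[[3.0, 4.0], [0.0, 1.0]]])
features = flow_to_mag_angle(flow)
print(features[0, 0])  # magnitude 5.0, angle atan2(4, 3) ~ 0.927 rad
```

Stacking magnitude and angle (rather than keeping raw x/y components) matches the tuple representation the excerpt describes for both the sparse and dense cases.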
Context 4
... multi-output deep neural network (DNN) provides a single function approximation to the underlying map, which exploits the interrelations among the different inputs and outputs. Similar to the work in [6], the approach proposed in this article presents a feedforward DNN architecture (see Figure 11), with three fully connected hidden layers of width 1600, which apply a sigmoid function to their outputs. The input data to the network were the 2m (sparse or dense) optical flow features described in Section 5, while the output vectors provided an estimate of the normal force applied at each of the n bins the tactile sensor surface was divided into, as described in Section 4. The architecture weights were trained with RMSProp with Nesterov momentum (known in the literature as Nadam, see [33]), with training batches of 100 samples and a learning rate of 0.001. ...
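The architecture above amounts to a plain feedforward pass. Below is a minimal NumPy sketch: the hidden width of 1600 and the sigmoid hidden activations come from the excerpt, while the values of m and n, the random placeholder weights, and the linear output layer are assumptions (the excerpt does not specify the output activation or the trained weights, which the paper obtains with Nadam, batch size 100, learning rate 0.001).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def force_dnn_forward(features, weights, biases):
    """Forward pass of the architecture described above: three fully
    connected hidden layers with sigmoid activations, followed by an
    (assumed) linear output layer estimating the normal force per bin."""
    h = features
    for W, b in zip(weights[:-1], biases[:-1]):
        h = sigmoid(h @ W + b)               # hidden layers, width 1600
    return h @ weights[-1] + biases[-1]      # linear output: n force bins

rng = np.random.default_rng(0)
m, n, width = 50, 100, 1600                  # example m and n; width per paper
sizes = [2 * m, width, width, width, n]      # input is 2m optical-flow features
weights = [rng.normal(0, 0.05, (a, b)) for a, b in zip(sizes, sizes[1:])]
biases = [np.zeros(b) for b in sizes[1:]]

batch = rng.normal(size=(100, 2 * m))        # one batch of 100 samples
pred = force_dnn_forward(batch, weights, biases)
print(pred.shape)                            # (100, 100): forces per bin
```

The single multi-output network means all n bin estimates share the same hidden representation, which is how the interrelations among outputs mentioned above are exploited.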
Context 5
... example of the prediction of the normal force distribution is shown in Figure 12. The learning architecture was evaluated for different values of m and n, and for features based on sparse and dense optical flow. ...
Context 6
... learning architecture was evaluated for different values of m and n, and for features based on sparse and dense optical flow. The results, greatly outperforming the authors' previous work in [6], are shown in Figure 13. ...
Context 7
... shown in Figure 13b, the RMSE_mc decreased as the number of averaging regions in the image increased, and was lower for the dense optical flow case given the same number of surface bins. Estimating the force distribution with a finer spatial resolution slightly degraded the performance according to this metric, and might require a different network architecture and more training data, due to the higher dimension of the predicted output. ...
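A simplified reading of a binned RMSE metric like the one discussed above can be sketched as follows; the exact definition of RMSE_mc is not reproduced in this excerpt, so the per-bin averaging scheme here is an assumption for illustration only.

```python
import numpy as np

def rmse_per_bin(pred, truth):
    """Root-mean-square error of a predicted normal-force distribution:
    RMSE over samples for each of the n surface bins, then averaged over
    bins. A simplified, assumed reading of the metric discussed above."""
    pred = np.asarray(pred, dtype=float)    # (samples, n) predicted forces
    truth = np.asarray(truth, dtype=float)  # (samples, n) ground truth
    per_bin = np.sqrt(np.mean((pred - truth) ** 2, axis=0))
    return per_bin.mean()

# Two samples over two surface bins (toy values).
truth = np.array([[0.0, 1.0], [2.0, 0.0]])
pred = np.array([[0.1, 1.1], [1.9, 0.2]])
print(round(rmse_per_bin(pred, truth), 4))  # -> 0.1291
```

Averaging per bin first (rather than pooling all errors) keeps the metric comparable across different choices of n, which matters when sweeping the number of surface bins as the excerpt describes.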
Context 8
... Figure 13 shows the resulting metrics, including d_loc (defined in Equations (13), (17) and (16)), for various values of image bins m and surface bins n, and for dense and sparse optical flow (OF) features. ...

Similar publications

Conference Paper
Full-text available
Object recognition has been extensively explored in the computer vision literature, and over the last few years the results in this field have sometimes even surpassed human performance. One of the main reasons for this success is the growing number of images available to generate training datasets for machine learning. In comparison to computer vi...
Conference Paper
Full-text available
Despite the visionary tenders that emerging technologies bring to education, modern learning environments such as MOOCs or Webinars still suffer from adequate affective awareness and effective feedback mechanisms, often leading to low engagement or abandonment. Artificial Conversational Agents hold the premises to ease the modern learner’s isolatio...

Citations

... [31][32][33] The choice of material and manufacturing process in VBTS sensing for robotic applications can significantly impact the sensor response for the targeted application. [34] For instance, using capacitive sensing with acrylic tips, [35] photometric stereo with a gel elastomer, [36] or marker tracking with soft Elastosil [37] can strongly affect the sensing feedback. Hence, elastomers are highly favorable in VBTS due to their flexibility, [38] their deformability, which captures precise details, [39,40] and their environmental friendliness. ...
Article
Full-text available
This work pioneers the application of direct ink writing (DIW) to fabricate an elastomeric, additively manufactured vision-based tactile sensor (VBTS). DIW cuts the fabrication time by 76%, allowing precise design control and reducing the complexity of the process compared to state-of-the-art (SOTA) molding techniques. Successful fabrication of the DIW sensor is verified in three stages. Firstly, the mechanical characteristics of the DIW sensor are on par with those of SOTA molded Ecoflex in terms of depth of compression, compression rate, and number of cycles. Secondly, using robotic pose estimation as a demonstration, the force-induced deformation of the DIW sensor yields normality-estimation performance comparable to that of the SOTA Ecoflex, with a mean absolute error of less than 0.6°. Thirdly, finite element analysis (FEA) of the DIW and SOTA Ecoflex sensors using the Yeoh model shows similar stress and strain distributions, providing further evidence of the DIW sensor's deformability and durability and signaling its successful fabrication.
... In [50], [51], the authors use FEM to estimate deformations of a BioTac [52] sensor and synthesize simulated data by learning a latent-space representation. Other papers have applied FEM to their custom-built soft tactile sensors [53]-[55]. Due to its high computational cost, FEM is typically not well suited to data-driven approaches like DRL, which we use, unless simplifying assumptions can be made [56]. ...
Preprint
Full-text available
The advent of tactile sensors in robotics has sparked many ideas on how robots can leverage direct contact measurements of their environment interactions to improve manipulation tasks. An important line of research in this regard is grasp force control, which aims to manipulate objects safely by limiting the amount of force exerted on the object. While prior works have either hand-modeled their force controllers, employed model-based approaches, or not shown sim-to-real transfer, we propose a model-free deep reinforcement learning approach trained in simulation and then transferred to the robot without further fine-tuning. We therefore present a simulation environment that produces realistic normal forces, which we use to train continuous force control policies. A detailed evaluation shows that the learned policy performs similarly to or better than a hand-crafted baseline. Ablation studies prove that the proposed inductive bias and domain randomization facilitate sim-to-real transfer. Code, models, and supplementary videos are available at https://sites.google.com/view/rl-force-ctrl
... The measurement of contact force uses resistive and strain-gauge elements [3], [4], [5], [6], [7], capacitive sensing [8], [9], [10], [11], [7], magnetic sensing [12], ferroelectric [13], triboelectric [14], and opto-resistive [15], [16] sensors. Recently, vision-based haptic sensors [17], [18], [19], [20], [21], [22], [23], [24] have offered various solutions for tactile sensing. The vision-based haptic sensor (VHS) is a class of optical sensors widely applied in robotic perception for probing the environment through kinaesthetic and tactile inputs. ...
... To this end, a multidimensional force platform with flexible interface is designed, fabricated and evaluated. Our platform features a deep-learning informed sensor design to decouple the multidimensional plantar force vectors [24]. ...
Article
Full-text available
The multidimensional force platform with a flexible interface for simultaneous assessment of plantar pressure and shear stresses has become highly anticipated for early diagnostics of diabetic foot ulcers (DFU). Robust detection of such multidimensional forces remains challenging, while an intrinsically flexible sensor interface is deemed necessary to improve detection accuracy by adapting to the plantar foot structure and its surface characteristics. This study proposes a novel silicone rubber-based vision sensor design for multidimensional force assessment under the foot. We employ the finite element method (FEM) to determine the optimal range of the platform for plantar force detection relevant to gait, offering a comprehensive yet efficient design scheme. After fabrication, a force-decoupling model was established by training a U-Net convolutional neural network on 15,400 datasets of multidimensional force-optical flow data collected by the customized calibration system. Following calibration, the platform achieves simultaneous real-time measurement of multidimensional plantar stresses, with an accuracy of 0.09 N for pressure over a range up to 25 N/cm² and 0.06 N for shear stress over a range up to 10 N/cm². The platform proposed in this work is the first neural-network-informed force platform with a flexible interactive interface, multidimensional force detection capability, and low manufacturing cost. In the future, it is expected to become an efficient tool for further research into the biomechanical etiology of DFU in both research and clinical settings.
... The GelSlim sensor used Young's modulus and Poisson's ratio to obtain the stiffness matrix of the gel pad, and determined the applied force with the standard Finite Element Method (FEM) [9]. Sferrazza et al. used the Boussinesq-Cerruti equations, which can be described as a matrix mapping containing these two parameters [10]. Zhang et al. obtained the inverse mapping model from the measured displacement to the estimated distributed force using FEM analysis software, which relies on these two parameters to derive the total stiffness matrix [11]. ...
... Consider a suitable set of parameters: E1 = 0.1 MPa, E2 = 0.2 MPa, ν1 = ν2 = 0.4, and lengths of 50 cm and 5 cm. The corresponding dimensionless quantities then evaluate to 0.1232, 0.0616, 7.5319 × 10⁻⁴, and 3.7760 × 10⁻³, which supports the approximate relationship in Eq. (10). In this case, ...
Preprint
Full-text available
For elastomer-based tactile sensors, represented by visuotactile sensors, routine calibration of mechanical parameters (Young’s modulus and Poisson’s ratio) has been shown to be important for force reconstruction. However, the reliance on existing in-situ calibration methods for accurate force measurements limits their cost-effective and flexible applications. This article proposes a new in-situ calibration scheme that relies only on comparing contact deformation. Based on the detailed derivations of the normal contact and torsional contact theories, we designed a simple and low-cost calibration device, EasyCalib, and validated its effectiveness through extensive finite element analysis. We also explored the accuracy of EasyCalib in the practical application and demonstrated that accurate contact distributed force reconstruction can be realized based on the mechanical parameters obtained. EasyCalib balances low hardware cost, ease of operation, and low dependence on technical expertise and is expected to provide the necessary accuracy guarantees for wide applications of visuotactile sensors in the wild. This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
... It has the following advantages: simple hardware configuration, low cost, and simultaneous measurement of normal and shear forces, and it has been successfully applied to various robot tasks, including peg-in-hole insertion and cable manipulation [17], [4], [18]. Therefore, marker-based visuotactile sensors are widely studied, and many types of sensors have been developed, including GelForce [19], TacTip [20], [21], GelStereo [22], Tac3D [23], and the sensor by ETH [24]. In this work, we mainly verified our proposed protocol on our marker-based visuotactile sensors based on GelSight [25]. ...
Article
Full-text available
Visuotactile sensors can provide rich contact information, and thus have great potential in contact-rich manipulation tasks with reinforcement learning (RL) policies. The Sim2Real technique tackles RL's reliance on a large amount of interaction data. However, most Sim2Real methods for manipulation tasks with visuotactile sensors rely on rigid-body physics simulation, which fails to simulate real elastic deformation precisely. Moreover, these methods do not exploit the characteristics of tactile signals when designing the network architecture. In this paper, we build a general-purpose Sim2Real protocol for manipulation policy learning with marker-based visuotactile sensors. To improve the simulation fidelity, we employ an FEM-based physics simulator that can simulate the sensor deformation accurately and stably for arbitrary geometries. We further propose a novel tactile feature extraction network that directly processes the set of pixel coordinates of tactile sensor markers, and a self-supervised pre-training strategy to improve the efficiency and generalizability of RL policies. We conduct extensive Sim2Real experiments on the peg-in-hole task to validate the effectiveness of our method, and further show its generalizability on additional tasks, including plug adjustment and lock opening. The protocol, including the simulator and the policy learning framework, will be open-sourced for community usage.
... In the camera-based method, the image of the deforming sensing surface captured by the camera is used to extract the tactile features (see Figure 2f). Usually, the deforming surface (soft sensing surface) has markers or pins arranged on its inner surface, whose displacements are recorded by the camera [32,[56][57][58][59][60][61]. Apart from using pins or markers, the imprints of the external object on the sensing skin are captured by the camera in certain sensors [62][63][64][65][66][67][68]. ...
Article
Full-text available
Tactile sensing plays a pivotal role in achieving precise physical manipulation tasks and extracting vital physical features. This comprehensive review paper presents an in-depth overview of the growing research on tactile-sensing technologies, encompassing state-of-the-art techniques, future prospects, and current limitations. The paper focuses on tactile hardware, algorithmic complexities, and the distinct features offered by each sensor. This paper has a special emphasis on agri-food manipulation and relevant tactile-sensing technologies. It highlights key areas in agri-food manipulation, including robotic harvesting, food item manipulation, and feature evaluation, such as fruit ripeness assessment, along with the emerging field of kitchen robotics. Through this interdisciplinary exploration, we aim to inspire researchers, engineers, and practitioners to harness the power of tactile-sensing technology for transformative advancements in agri-food robotics. By providing a comprehensive understanding of the current landscape and future prospects, this review paper serves as a valuable resource for driving progress in the field of tactile sensing and its application in agri-food systems.
... The soft material is equipped with a pattern that allows the image sensor to capture deformations clearly. Such patterns include small colored markers [10,11], randomly dispersed fluorescent markers [12], and colored LED patterns [13]. Compared to other tactile sensors, VBTSs do not require much instrumentation; only the imaging device and a source of illumination are required to be instrumented and maintained. ...
Article
Full-text available
Vision-based tactile sensors (VBTSs) have become the de facto method for giving robots the ability to obtain tactile feedback from their environment. Unlike other solutions to tactile sensing, VBTSs offer high spatial resolution feedback without compromising on instrumentation costs or incurring additional maintenance expenses. However, conventional cameras used in VBTS have a fixed update rate and output redundant data, leading to computational overhead. In this work, we present a neuromorphic vision-based tactile sensor (N-VBTS) that employs observations from an event-based camera for contact angle prediction. In particular, we design and develop a novel graph neural network, dubbed TactiGraph, that asynchronously operates on graphs constructed from raw N-VBTS streams, exploiting their spatiotemporal correlations to perform predictions. Although conventional VBTSs use an internal illumination source, TactiGraph is reported to perform efficiently in both scenarios (with and without an internal illumination source), thus further reducing instrumentation costs. Rigorous experimental results revealed that TactiGraph achieved a mean absolute error of 0.62° in predicting the contact angle and was faster and more efficient than both conventional VBTS and other N-VBTS, with lower instrumentation costs. Specifically, N-VBTS requires only 5.5% of the computing time needed by VBTS when both are tested on the same scenario.
... Multispectral illumination from below makes it possible to derive the deformation depth and thus a detailed 2.5D geometry of the reflective surface. For marker-based sensing, high-contrast points are painted on the clear surface of the elastomer [6,32,35] or on the interior of an opaque hull for TacTip sensors [36,41], or colored balls are directly encapsulated in the soft material [14,29,46]. These sensors have been used extensively for tactile sensing in robotic applications, mounting the sensor on the end effector to measure gripping force and detect slipping. ...
Preprint
Full-text available
Cameras provide a vast amount of information at high rates and are part of many specialized or general-purpose devices. This versatility makes them suitable for many interaction scenarios, yet they are constrained by geometry and require objects to keep a minimum distance for focusing. We present the LensLeech, a soft silicone cylinder that can be placed directly on or above lenses. The clear body itself acts as a lens to focus a marker pattern from its surface into the camera it sits on. This allows us to detect rotation, translation, and deformation-based gestures such as pressing or squeezing the soft silicone. We discuss design requirements, describe fabrication processes, and report on the limitations of such on-lens widgets. To demonstrate the versatility of LensLeeches, we built prototypes to show application examples for wearable cameras, smartphones, and interchangeable-lens cameras, extending existing devices by providing both optical input and output for new functionality.
... Such a problem becomes especially prominent when the sensing area is constantly moving during the rolling motion. In these situations, techniques using marker tracking with nearest temporal matching (27, 35) or optical flow (46, 47) tend to generate erroneous results. Instead, we adopted Random Optimization to reliably track marker displacement during rolling by maximizing marker-flow smoothness (48), which assumes that nearby markers move with similar velocities, and by minimizing marker mismatch between frames. ...
Preprint
Full-text available
Manipulation of objects within a robot's hand is one of the most important challenges in achieving robot dexterity. The "Roller Graspers" refers to a family of non-anthropomorphic hands utilizing motorized, rolling fingertips to achieve in-hand manipulation. These graspers manipulate grasped objects by commanding the rollers to exert forces that propel the object in the desired motion directions. In this paper, we explore the possibility of robot in-hand manipulation through tactile-guided rolling. We do so by developing the Tactile-Reactive Roller Grasper (TRRG), which incorporates camera-based tactile sensing with compliant, steerable cylindrical fingertips, with accompanying sensor information processing and control strategies. We demonstrated that the combination of tactile feedback and the actively rolling surfaces enables a variety of robust in-hand manipulation applications. In addition, we also demonstrated object reconstruction techniques using tactile-guided rolling. A controlled experiment was conducted to provide insights on the benefits of tactile-reactive rollers for manipulation. We considered two manipulation cases: when the fingers are manipulating purely through rolling and when they are periodically breaking and reestablishing contact as in regrasping. We found that tactile-guided rolling can improve the manipulation robustness by allowing the grasper to perform necessary fine grip adjustments in both manipulation cases, indicating that hybrid rolling fingertip and finger-gaiting designs may be a promising research direction.