Fig 3 - uploaded by Nicholas James Pestell
Sensor mounted as an end-effector on the robot arm used for experiments. The two stimuli used are also shown: circle (right) and non-uniform volute (left). 


Source publication
Article
Full-text available
Tactile sensing is required for human-like control with robotic manipulators. Multimodality is an essential component for these tactile sensors, for robots to achieve both the perceptual accuracy required for precise control, as well as the robustness to maintain a stable grasp without causing damage to the object or the robot itself. In this study...

Contexts in source publication

Context 1
... onto the stimulus edge and records 5 frames with the sensor held statically at the bottom of the tap (∼2 mm of compression of the sensing surface). Over the 5 frames, pixel values for all N_dims = 900 are recorded, yielding a total of 4500 sensor values. The stimulus used was a 3D-printed circular object (diameter = 107 mm) with a 90° edge, shown in Fig. ...
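The sampling described in this context (5 frames × 900 pixel values per tap, concatenated into 4500 sensor values) can be sketched as a simple array operation. The array names and the random placeholder data are illustrative, not taken from the paper:

```python
import numpy as np

# Sketch of the per-tap data layout described above: each tap yields
# N_FRAMES frames, each containing N_DIMS pixel values, which are
# concatenated into one feature vector of 4500 sensor values.
N_FRAMES = 5
N_DIMS = 900  # pixel values per frame

rng = np.random.default_rng(0)
frames = rng.random((N_FRAMES, N_DIMS))  # placeholder for real sensor frames

tap_vector = frames.reshape(-1)  # flatten frames into one (4500,) vector
assert tap_vector.shape == (N_FRAMES * N_DIMS,)
```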
Context 2
... demonstrate the benefits gained from dual-modality we design a task where the robot uses both modes. Using Tip- B, the robot must follow the edge of a previously unseen stimulus, trained only with the circle shape used for previous experiments. A non-uniform volute shape (see Fig. 3) is used, where the radius of curvature varies from 20 mm to 50 ...

Similar publications

Article
In this research, a novel hybrid-material bone implant manufactured through the integration of two materials using additive manufacturing (AM) technology is proposed. This biomimetic approach can produce high-strength biomechanical implants with optimised geometry and mass. The combination of polymers allows a significant leap in the development...
Article
Currently, one of the most promising strategies in bone tissue engineering focuses on the development of biomimetic scaffolds. Ceramic-based scaffolds with favorable osteogenic ability and mechanical property are promising candidates for bone repair. Three-dimensional (3D) printing is an additive manufacturing technique, which allows the fabricatio...
Preprint
Full-text available
Recent research shows that material microstructural designs mimicking biomaterials offer an effective way to design tougher materials. To reveal the underlying microstructure-toughness relationship, five bioinspired material microstructures are investigated experimentally, including the brick-and-mortar, cross-lamellar, concentric hexagonal, and r...
Article
Full-text available
Introduction The Barrow Biomimetic Spine project is an ongoing effort to develop a three-dimensional (3D)-printed synthetic spine model with high anatomical and biomechanical fidelity to human tissue. The purpose of this study was to evaluate the biomechanical performance of an L4-L5 3D-printed synthetic spine model in a lordotic correction test af...

Citations

... In [111], the tactile sensor was used in a system capable of working in real time with sub-second latency. This tactile exploration can be applied to medical fields for tissue perception and surface injury inspection in the patient's body as well as palpation [112]. The research team then developed the modular version of the tactile sensor [113]. ...
Preprint
Minimally invasive robotic interventions have highlighted the need to develop efficient techniques to measure forces applied to the soft tissues. Since the last decade, many scholars have focused on micro-scale and macro-scale robotic manipulations. Early articles used the model of soft tissue mathematically and tracked the displacement of the contour of the object in the vision system to provide the corresponding force to the user. Lack of knowledge of different materials and the computational complexity led to a transition from model-based to learning-based approaches to interpret the relation between object deformations, extracted from the vision system, and the real forces applied to the object. The dramatic growth of machine learning techniques and its integration with computer vision has brought novel learning-based visual data processing methods to the area. The application of the image-based force estimation methods in a controlled medical intervention has also received significant attention in the last five years. A decent number of surveys have been published on micromanipulation in recent years, especially for cell microinjection. However, the state of the art in meso- and macro-scale medical robotic interventions has not been reviewed. The aim and contribution of this paper are to fill the stated gap by reviewing the recent advances in image-based force estimation in robotic interventions. The survey shows that learning-based force estimation methods are growing significantly by using deep learning-based methods. The survey will encourage researchers and surgeons to apply learning-based algorithms to real-time medical and health-related operations.
... Another notable point is that in [28] the authors use a high-priced tactile sensor (TacTip) to classify circular shapes (110 mm diameter) with a histogram likelihood model that analyses a 30×30-pixel image (900 pixels) collected by following the shape contours, whereas in the proposed method only 680 pixels are constructed per tested shape using a lower-priced sensor (TakkTile), and a similarly high accuracy is reached in recognising the shape. Moreover, in [29], despite using 5×9 pressure elements in a sensor to classify six simple shapes, an overall accuracy of 97.5% was obtained, which is nearly identical to ours (97.09%) ...
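A minimal sketch of the kind of histogram likelihood model mentioned above: per-class histograms of sensor values act as likelihoods, and a test sample is assigned the class with the highest summed log-likelihood. The function names, bin count and synthetic beta-distributed data are illustrative assumptions, not the published method:

```python
import numpy as np

def fit_histograms(samples_by_class, bins=16, value_range=(0.0, 1.0)):
    """Estimate a smoothed, normalised histogram (likelihood) per class."""
    models = {}
    for label, samples in samples_by_class.items():
        hist, edges = np.histogram(np.concatenate(samples), bins=bins,
                                   range=value_range)
        # Laplace smoothing so no bin has zero probability
        models[label] = ((hist + 1) / (hist.sum() + bins), edges)
    return models

def classify(sample, models):
    """Return the class maximising the summed log-likelihood of the sample."""
    scores = {}
    for label, (probs, edges) in models.items():
        idx = np.clip(np.digitize(sample, edges) - 1, 0, len(probs) - 1)
        scores[label] = np.log(probs[idx]).sum()
    return max(scores, key=scores.get)

# Synthetic training data standing in for real sensor readings.
rng = np.random.default_rng(1)
train = {"circle": [rng.beta(2, 5, 900) for _ in range(10)],
         "edge":   [rng.beta(5, 2, 900) for _ in range(10)]}
models = fit_histograms(train)
print(classify(rng.beta(2, 5, 900), models))
```

The same scheme generalises from raw pixel values to marker displacements by binning displacement components instead of intensities.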
... This sensitive haptic sense helps us to perform dexterous operations [2,3]. Studies have attempted to endow robotics with intelligent haptic perception that is similar to the human haptic system [4][5][6]. It is of great importance because proper identification is the prerequisite of reliable handling. ...
Article
Full-text available
Haptic cognitive models are used to map the physical stimuli of texture surfaces to subjective haptic cognition, providing robotic systems with intelligent haptic cognition to perform dexterous manipulations in a manner that is similar to that of humans. Nevertheless, there is still the question of how to extract features that are stable and reflect the biological perceptual characteristics as the inputs of the models. To address this issue, a semi-supervised sparse representation method is developed to predict subjective haptic cognitive intensity in different haptic perceptual dimensions of texture surfaces. We conduct standardized interaction and perception experiments on textures that are part of common objects in daily life. Effective data cues sifting, perceptual filtering, and semi-supervised feature extraction steps are conducted in the process of sparse representation to ensure that the source data and features are complete and effective. The results indicate that the haptic cognitive model using the proposed method performs well in fitting and predicting perceptual intensity in the perceptual dimensions of “hardness,” “roughness,” and “slipperiness” for texture surfaces. Compared with previous methods, such as models using multilayer regression and hand-crafted features, the use of standardized interaction, cue sifting, perceptual filtering, and semi-supervised feature extraction could greatly improve the accuracy by improving the completeness of collected data, the effectiveness of features, and simulations of some physiological cognitive mechanisms. The improved method can be implemented to improve the performance of the haptic cognitive model for texture surfaces, and can also inspire research on intelligent cognition and haptic rendering systems.
... Within the deep skin layers, a population of RA mechanoreceptors called Pacinian corpuscles responds to high-frequency vibration (peak sensitivity ∼250 Hz). A partial mimicry of their function can be attained by using the TacTip with a high frame-rate (kHz) camera [37], [38]; however, questions remain about whether this approach to vibration sensing is effective or even biomimetic, since it images fast pin movement rather than vibration in the deeper gel. In our view, a biomimetic counterpart of the vibration sense would be to embed a pressure sensor in the gel of the TacTip, like the vibration modality of the BioTac [17]. ...
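How a high frame-rate camera could expose vibration information from fast pin movement can be illustrated with a toy spectral analysis. The 1 kHz sampling rate, the synthetic 250 Hz signal and all names are assumptions for the sketch, not the cited implementation:

```python
import numpy as np

# Illustrative sketch: given a pin's displacement sampled at a high
# camera frame rate, the dominant vibration frequency is recovered
# with an FFT. The rate and signal below are assumed for the example.
FS = 1000.0          # camera frame rate in Hz (assumed kHz-class)
t = np.arange(0, 1.0, 1.0 / FS)  # 1 s of capture

# Synthetic pin displacement: 250 Hz vibration (Pacinian peak
# sensitivity) plus small measurement noise.
signal = 0.2 * np.sin(2 * np.pi * 250.0 * t)
signal += 0.02 * np.random.default_rng(0).standard_normal(t.size)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, d=1.0 / FS)
dominant = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin
print(f"dominant vibration frequency: {dominant:.0f} Hz")  # → 250 Hz
```

Note the Nyquist constraint implicit here: resolving a 250 Hz vibration requires a frame rate above 500 Hz, which is why a kHz camera is needed for this approach.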
... Over the same period, a perception method based on a histogram likelihood model over the marker displacements was used [40]. For control, the position of an object feature was predicted, such as a cylinder or edge relative to the sensor [34], applied to rolling manipulation [69], [71], [89] or tapping around 2D contours [38], [46], [72]. However, those methods did not extend well from discrete to continuous and dynamic environments. ...
Preprint
Full-text available
Reproducing the capabilities of the human sense of touch in machines is an important step in enabling robot manipulation to have the ease of human dexterity. A combination of robotic technologies will be needed, including soft robotics, biomimetics and the high-resolution sensing offered by optical tactile sensors. This combination is considered here as a SoftBOT (Soft Biomimetic Optical Tactile) sensor. This article reviews the BRL TacTip as a prototypical example of such a sensor. Topics include the relation between artificial skin morphology and the transduction principles of human touch, the nature and benefits of tactile shear sensing, 3D printing for fabrication and integration into robot hands, the application of AI to tactile perception and control, and the recent step-change in capabilities due to deep learning. This review consolidates those advances from the past decade to indicate a path for robots to reach human-like dexterity.
... In amongst the myriad different sensing technologies, spanning both force and tactile sensors, optical sensors using cameras have a lot to recommend them. This type of sensor can provide a high spatial resolution and, crucially, can sense multiple modalities (such as contact force, contact surface geometry, and hardness) from analyzing a single captured image [14][15][16][17][18][19]. Vision-based tactile sensors usually use soft and deformable materials as the sensing medium. ...
Article
Full-text available
Force and tactile sensing has experienced a surge of interest over recent decades, as it conveys a range of information through physical interaction. Tactile sensors aim to obtain tactile information (including pressure, texture, etc.). However, current tactile sensors have difficulties in accurately acquiring force signals with regard to magnitude and direction. This is because tactile sensors such as the GelSight sensor estimate shear forces from discrete markers embedded in a compliant sensor interface, employing image processing techniques; the resultant force errors are sizeable. This paper presents a novel design for a force/tactile sensor, namely the F-TOUCH (Force and Tactile Optically Unified Coherent Haptics) sensor, representing an advancement on current vision-based tactile sensors. In addition to acquiring geometric features at a high spatial resolution, our sensor incorporates a number of deformable structural elements allowing us to measure translational and rotational force and torque along six axes with high accuracy. The proposed sensor contains three key components: a coated elastomer layer acting as the compliant sensing medium, spring mechanisms acting as deformable structural elements, and a camera for image capture. The camera records the deformation of the structural elements as well as the distortion of the compliant sensing medium, concurrently acquiring force and tactile information. The sensor is calibrated with the use of a commercial ATI force sensor. An experimental study shows that the F-TOUCH sensor outperforms the GelSight sensor with regard to its capabilities to sense force signals and capture the geometry of the contacted object.
... While the results we obtain are similar to those using vibrometry, studying the vibrations of the artificial skin could determine whether the sensor does behave similarly to human skin. One approach is to modify the sensor with a higher frame-rate camera, which could allow us to see the high-frequency deformations of the skin; another approach would be to add another high-frequency tactile sensing modality to the TacTip [7]. ...
Chapter
Full-text available
Ultrasonic phased arrays are used to generate mid-air haptic feedback, allowing users to feel sensations in mid-air. In this work, we present a method for testing mid-air haptics with a biomimetic tactile sensor that is inspired by the human fingertip. Our experiments with point, line, and circular test stimuli provide insights on how the acoustic radiation pressure produced by the ultrasonic array deforms the skin-like material of the sensor. This allows us to produce detailed visualizations of the sensations in two-dimensional and three-dimensional space. This approach provides a detailed quantification of mid-air haptic stimuli of use as an investigative tool for improving the performance of haptic displays and for understanding the transduction of mid-air haptics by the human sense of touch.
... 2) Tip Choice: For the tip, a mini-TacTip [28] is used, as this design has been verified before and is a good compromise between being small enough for the robot to carry and big enough to provide a large tactile sensing area (tip diameter is 27 mm at the base of the black dome). The hemispherical design was chosen here, instead of a flatter design, because, whilst the hemisphere makes less contact with a flat surface when the leg is perpendicular to the surface, it retains roughly the same contact area with surfaces rotated up to approximately 45°. ...
Preprint
Little research into tactile feet has been done for walking robots despite the benefits such feedback could give when walking on uneven terrain. This paper describes the development of a simple, robust and inexpensive tactile foot for legged robots based on a high-resolution biomimetic TacTip tactile sensor. Several design improvements were made to facilitate tactile sensing while walking, including the use of phosphorescent markers to remove the need for internal LED lighting. The usefulness of the foot is verified on a quadrupedal robot performing a beam walking task and it is found the sensor prevents the robot falling off the beam. Further, this capability also enables the robot to walk along the edge of a curved table. This tactile foot design can be easily modified for use with any legged robot, including much larger walking robots, enabling stable walking in challenging terrain.
... However, the information is scant and monotonous. With efforts to design and build tactile sensors [56], [57], haptics came into researchers' view as a way to acquire diverse information, such as material characteristics, pressure, vibration, and pain, helping robots to perceive their surroundings [58]. ...
Article
Full-text available
The development of the hardware design of robot bodies and external sensors enables the improvement of multiple perception and control methods for efficient interactions with humans and unstructured environments. An important application area for integrating humans and robots with such advanced interactions is physical Human-Robot Interaction (pHRI). This area places high demands on real-time perception of the dynamic unity formed by robot, human, environment and operating objects. To reflect the current progress of pHRI, this paper provides a comprehensive review of the state of the art in multi-modal fusion, exploring its classifications, feasibility, challenges and existing methodologies, based on the view that pHRI is spatio-temporal sharing, multi-modal and multi-task. Finally, several directions of research are recommended based on the discussion of multi-modal fusion methods.