Haptic discrimination of material properties by a robotic hand
Shinya Takamuku∗, Gabriel Gómez∗∗, Koh Hosoda∗, and Rolf Pfeifer∗∗

∗Dept. of Adaptive Machine Systems, Graduate School of Engineering, Osaka University, Japan
{shinya.takamuku, hosoda}@ams.eng.osaka-u.ac.jp

∗∗Artificial Intelligence Laboratory, Dept. of Information Technology, University of Zurich, Switzerland
Andreasstrasse 15, CH-8050 Zurich, Switzerland
{gomez, pfeifer}@i.unizh.ch
Abstract— One of the key aspects of understanding human intelligence is to investigate how humans interact with their environment. Performing articulated movement and manipulation tasks in a constantly changing environment has proven more difficult than expected. The difficulties of robot manipulation are in part due to the unbalanced relation between vision and haptic sensing. Most robots are equipped with high-resolution cameras, whose images are processed by well-established computer vision algorithms such as color segmentation, motion detection, edge detection, etc. However, the majority of robots have very limited haptic capabilities.
This paper presents our attempt to overcome these difficulties by: (a) using a tendon-driven robotic hand with rich dynamical movements and (b) covering the hand with a set of haptic sensors on the palm and the fingertips; the sensors are based on a simplified version of an artificial skin with strain gauges and PVDF (polyvinylidene fluoride) films. The results show that if the robotic hand actively explores different objects using the exploratory procedures tapping and squeezing, material properties such as hardness and texture can be used to discriminate haptically between different objects.
Index Terms— Haptic perception, tapping, squeezing, artificial skin, tendon-driven robotic hand.
I. INTRODUCTION
Humans rely on multiple sensory modalities to estimate environmental properties. Both the eyes and the hands can provide information about an object’s shape, but in contrast to vision, the hands are especially adapted to perceive material properties such as texture, temperature, and weight. Manual haptic perception is the ability to gather information about objects by using the hands. The tactile properties of objects are processed by the somatosensory system, which uses information from receptors that respond to touch and vibration, body movement, temperature, and pain [1]. A baby’s earliest explorations of himself and his environment are made using his sense of touch ([2], [3]), his hands and mouth being the principal exploratory tools.
[4] presented evidence for genuine haptic perception of material properties (including weight differences) by infants as young as three months old, although they did not explore the objects with hand movements specific to the properties in question (due to their inability to move the fingers independently). The stimuli consisted of objects which were all held with the same kind of grip, so that they could not be discriminated on the basis of hand posture, and they were presented in total darkness to avoid visual perception or cross-modal associations; in other words, the exploration was purely haptic.
Infants acquire the ability to haptically detect various object properties asynchronously: first size or volume (3 months), followed by texture, temperature, and hardness (6 months), and weight (6-9 months) [5].
Haptic exploration is a task-dependent activity, and when
people seek information about a particular object property,
such as size, temperature, hardness, or texture, they perform
stereotyped exploratory hand movements or “exploratory
procedures” [6]. The exploratory procedures used by adults
to explore haptic properties are lateral motion (a rubbing
action) for detecting texture; pressure (squeezing or poking)
for encoding hardness; static contact for temperature; lifting
to perceive the weight; enclosure for volume and gross
contour information; and contour following for precise
contour information as well as global shape [7].
In terms of robotic manipulation, a fully integrated force/tactile sensor has been developed by [8] for the “MAC-HAND”, as well as a technique to compute the pressure centroid and the associated ellipsoid during contact (see [9]). A dynamical model for viscoelastic pads, useful to quantitatively characterize the behavior of materials used to cover robotic hands, was presented by [10]. A control approach exploiting the relation between the stiffness and the applied load was proposed by [11] in order to arbitrarily change the overall stiffness of a robot hand. Using the robot “Obrero”, [12] demonstrated the feasibility of the “tapping” exploratory procedure by tapping objects with a finger and using the produced sound to recognize them. A self-organizing map was used by [13] to enable the “Babybot” to categorize 6 different objects plus the no-object condition; the network encoded not only the shape but also intrinsic properties like weight.

Fig. 1. Experimental setup. (a) Tendon-driven robotic hand. (b) Artificial skin with strain gauge and PVDF (polyvinylidene fluoride) film sensors mounted on the fingertips and the palm. The hand is exploring a piece of paper.
In this paper we present our work with a tendon-driven robot hand, the “Yokoi hand” developed by [14], covered with a set of haptic sensors on the palm and the fingertips. The sensors are based on a simplified version of an artificial skin with strain gauges and PVDF (polyvinylidene fluoride) films developed by [15], [16]. In the following section we describe the tendon-driven mechanism of our robotic hand and the position, type, and number of sensors covering it. In Section III we specify the robot’s task. In Section IV we explain the different exploratory procedures. Then we present some experimental results as well as a discussion and future work.
II. ROBOTIC SETUP
Our robotic platform can be seen in Fig. 1a. The tendon-driven robot hand is partly built from elastic, flexible and deformable materials (see [14]). The hand applies an adjustable power mechanism developed by [17].
The robotic hand has 18 degrees of freedom (DOF) that are driven by 13 servomotors and has been equipped with three types of sensors: flex/bend, angle, and haptic.

Fig. 2. Sketches of the artificial skins. (a) Fingertip. (b) Palm.
A. Bending and angle sensors
For the flex/bend sensors, the bending angle is proportional to the sensor’s resistance, and each sensor responds over a physical range between straight and a 90-degree bend; they are placed on every finger as position sensors. Angle sensors in all the joints are provided by potentiometers.
B. Haptic sensors

The haptic sensors are based on strain gauges and PVDF (polyvinylidene fluoride) films. They are located on the palm and on the fingertips, as can be seen in Fig. 1. The artificial skin is made by placing strain gauges and PVDF films between two layers of silicone rubber. The strain gauges detect the strain and work in a similar way to the Merkel cells in the human skin, whereas the PVDF films detect the velocity of the strain and correspond to the Meissner corpuscles (MCs, see [18]) in the human skin. The PVDF films are expected to be more sensitive to transient/small strains than the strain gauges. The shape of the artificial skin is modified to fit the robotic hand. Sketches of the artificial skins for the fingers and the palm are shown in Fig. 2a and Fig. 2b respectively; photographs of the corresponding skins are shown in Fig. 3a and Fig. 3b. In each fingertip there is one strain gauge and one PVDF film. In the palm there are four strain gauges and four PVDF films.

Fig. 3. Photographs of the artificial skins. (a) Fingertip. (b) Palm.
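To make the complementary roles of the two receptor types concrete, the following sketch (a conceptual illustration, not from the paper; the contact profile is an invented press-and-hold event) models the strain-gauge reading as the strain itself and the PVDF reading as its time derivative:

```python
# Conceptual illustration (assumption): strain gauges track the sustained
# deformation, while PVDF films respond to its rate of change.
import numpy as np

fs = 1600.0                       # sampling rate of the setup (1.6 kHz)
t = np.arange(0.0, 1.0, 1.0 / fs)

strain = 0.5 * (1.0 + np.tanh((t - 0.3) * 40.0))  # smooth press at t = 0.3 s

gauge_like = strain                        # Merkel-cell-like: sustained signal
pvdf_like = np.gradient(strain, 1.0 / fs)  # Meissner-like: transient signal

i = int(0.9 * fs)  # well after contact is established
print(f"gauge-like reading at 0.9 s: {gauge_like[i]:.3f}")  # still high
print(f"PVDF-like reading at 0.9 s: {pvdf_like[i]:.3f}")    # back near zero
print(f"PVDF-like peak during transient: {pvdf_like.max():.1f}")
```

The gauge-like channel stays high while contact lasts, whereas the PVDF-like channel peaks only at the transient, which is why the PVDF films are better suited to detecting tap events.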
C. Robot control

We control the robot hand using a TITech™ SH2 controller. The controller produces up to 16 PWM (pulse width modulation) signals for the servomotors and acquires the values from the bending and angle sensors. The motor controller receives its commands through a USB port.
Sensor signals from the strain gauges and the PVDF films are amplified and fed to a host computer via a CONTEC™ data acquisition card at a rate of 1.6 kHz.
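As a rough illustration of the host-side acquisition just described, the sketch below buffers the amplified sensor channels. `read_channels` is a hypothetical stand-in for the vendor driver call (not a real CONTEC API), and the channel counts are assumptions based on the sensor layout of Section II-B:

```python
# Host-side acquisition sketch. `read_channels` is a hypothetical placeholder
# for the data acquisition driver call; channel counts are assumptions.
FS = 1600       # sampling rate reported in the paper (1.6 kHz)
N_GAUGES = 5    # assumed: 4 strain gauges in the palm + 1 in the ring fingertip
N_PVDF = 5      # assumed: matching PVDF film channels

def read_channels(n: int) -> list[float]:
    """One blocking read of n amplified sensor channels (placeholder)."""
    return [0.0] * n  # replace with the actual driver call

def record(duration_s: float) -> list[list[float]]:
    """Buffer strain-gauge and PVDF samples for one exploration run."""
    samples = []
    for _ in range(int(duration_s * FS)):
        samples.append(read_channels(N_GAUGES) + read_channels(N_PVDF))
    return samples

run = record(60.0)  # one minute per object, as in the protocol of Section III
```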
III. ROBOT TASK
The robot performs two exploratory procedures with the ring finger, namely squeezing and tapping, over seven objects of different material properties as well as the no-object condition. Each object was taped onto the palm of the robot hand and explored for one minute. The objects can be seen in Fig. 4. The robotic hand performing a typical experiment can be seen in Fig. 1b.
Fig. 4. Objects with different material properties.
Fig. 5. Schematic of a finger. Positions of the fingertip (red marker), the middle hinge (green marker) and the base (blue marker).
IV. EXPLORATORY PROCEDURES
The robotic hand actively explores different objects using
the exploratory procedures: squeezing and tapping.
A. Squeezing
For the squeezing exploratory procedure, we drove both motors controlling the ring finger to the maximum angular position, thus making the finger close over the palm as much as possible and squeeze the object, as described in Eq. (1):

$ang_i(t) = maxAng_i$    (1)

where $ang_i$ is the target angular position of the i-th finger joint ($ang_L$ and $ang_U$), and $maxAng_i$ is the maximum angular position of the i-th finger joint.

Fig. 6. Kinematics of the robotic hand under sinusoidal position control. The upper plot is the angle between the middle hinge and the fingertip ($ang_L$), whereas the lower plot corresponds to the angle between the base of the finger and the middle hinge ($ang_U$).
B. Tapping
The tapping exploratory procedure was achieved by a sinusoidal position control of the ring finger that can be described as follows:

$ang_i(t) = A_i \sin(\omega t + \phi) + B_i$    (2)

where $ang_i$ is the target angular position of the i-th finger joint ($ang_L$ and $ang_U$), $A_i$ is the amplitude of the oscillation for the i-th finger joint, $B_i$ is the set point of the oscillation (i.e., 60 degrees) for the i-th finger joint, $\omega$ is the frequency of the oscillation, and $\phi$ is the phase delay between the oscillations of the finger joints.
Increasing and decreasing the position of the servomotors pulled the tendons, which made the fingers move back and forth, tapping the object on the palm. Fig. 6 shows the resulting motion of the finger during the no-object condition.
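As a sketch of how the two command signals in Eqs. (1) and (2) might be generated in software: only the 60-degree set point is stated in the text, so the joint limits, amplitude, frequency, and phase delay below are illustrative assumptions.

```python
# Joint-angle targets for the two exploratory procedures (Eqs. (1) and (2)).
# Numerical values other than the 60-degree set point B are assumptions.
import math

MAX_ANG = {"U": 120.0, "L": 120.0}  # assumed maximum joint angles (degrees)

def squeeze_target(joint: str, t: float) -> float:
    """Eq. (1): hold each ring-finger joint at its maximum angle."""
    return MAX_ANG[joint]

def tap_target(joint: str, t: float,
               A: float = 40.0,             # assumed amplitude (degrees)
               omega: float = 2 * math.pi,  # assumed 1 Hz oscillation
               phi: float = 0.5,            # assumed phase delay (rad)
               B: float = 60.0) -> float:   # set point stated in the paper
    """Eq. (2): sinusoidal position target; the distal joint lags by phi."""
    phase = phi if joint == "L" else 0.0
    return A * math.sin(omega * t + phase) + B

# Stream targets to the servo controller, e.g. at 50 Hz:
for k in range(200):
    t = k / 50.0
    ang_u, ang_l = tap_target("U", t), tap_target("L", t)
    # send ang_u, ang_l to the TITech SH2 controller (driver call omitted)
```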
V. RESULTS
Fig. 7 shows a time series from typical squeezing experiments over 10 sec. There are four strain gauges on the palm: the yellow lines represent the output of the strain gauges during the squeezing of a piece of tissue (soft material), whereas the light green lines represent the output of the strain gauges while squeezing a circuit breadboard (hard material). As can be seen, the squeezing exploratory procedure can be used to distinguish the compliance (hardness) of an object.

Fig. 7. Squeezing exploratory procedure. Output of the four strain gauges located on the palm of the robotic hand while exploring a piece of tissue (yellow) and a circuit breadboard (light green) during 10 sec.

Fig. 8. Tapping exploratory procedure. Output of the PVDF film located on the fingertip of the ring finger while exploring the objects in Fig. 4 during 2 sec.
Fig. 8 shows a time series of a typical tapping experiment during 2 sec. There is one strain gauge and one PVDF film in the sensor located on the fingertip of the ring finger. The color correspondence is as follows: no-object condition (red), breadboard (light green), card (blue), lego (light blue), paper (pink), tissue (yellow), cloth (black), bubble (white). The larger output corresponds to the moment when the finger taps the object, and the smaller output corresponds to the moment when the finger is pulled back and leaves the object.
Fig. 9. Classification of seven objects plus the no-object condition by a SOM using the “squeezing” exploratory procedure (legend: breadboard, bubble, card, lego, none, paper, tissue).

Fig. 10. Classification of seven objects plus the no-object condition by a SOM using the “tapping” exploratory procedure.
A self-organizing map was used to categorize the experimental data. We used the software package SOM_PAK version 3.1 [19]; the topology was a hexagonal lattice, and the neighborhood function type used was bubble.
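As an illustration of this categorization step, the following sketch trains a comparable map with the open-source MiniSom package in place of SOM_PAK; the feature matrix is placeholder data standing in for the recorded sensor values (e.g., palm strain-gauge averages for squeezing):

```python
# SOM categorization sketch using MiniSom instead of SOM_PAK 3.1.
# The feature matrix is placeholder data, not the paper's recordings.
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(0)
features = rng.random((400, 16))    # placeholder: 400 samples, 16-dim features

som = MiniSom(8, 8, features.shape[1],         # 8x8 map, as for "squeezing"
              sigma=1.0, learning_rate=0.5,
              neighborhood_function="bubble",  # bubble neighborhood, as in text
              topology="hexagonal")            # hexagonal lattice, as in text
som.random_weights_init(features)
som.train_random(features, 5000)               # 5000 training iterations

# Each sample maps to its best-matching unit; samples with similar material
# properties should land in nearby cells of the lattice.
cells = [som.winner(x) for x in features]
print(cells[:5])
```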
Fig. 9 shows the result for the “squeezing” exploratory procedure. The input for the SOM was the average value of the four strain gauges in the palm, and the size of the SOM was 8x8. As can be seen, soft objects (e.g., tissue) are located in the middle, whereas hard objects (e.g., circuit breadboard and lego) are distributed on the sides of the map.
For “tapping”, the input was the sensory sequence of the PVDF film located on the finger, and the results are shown in Fig. 10. The number of samples per object was 50, and the size of the SOM was increased to 16x16. As can be seen, the soft objects are located at the left and the upper part, while the hard objects are distributed in the middle. The no-object condition is in the bottom right part. In other words, moving left and up in the map corresponds to increasing softness.
VI. DISCUSSION AND FUTURE WORK
Research on haptic perception is certainly important to development and learning, and our results show how the exploratory procedures squeezing and tapping can be used to recognize the hardness of an object as well as the no-object condition. Our robotic approach gives objective representations to the categories that we obtain from such exploratory behaviors. Such categories should be useful for discussing in more detail the haptic experience reported by infants or adults while exploring objects using the same type of exploratory procedures.
In the future we will introduce a “rubbing” exploratory procedure to encode texture and friction features. Visual and proprioceptive information will also be included in order to make the categorization more robust. At the moment the artificial skin used on the palm is very flat and smooth; we would like to make it more similar to a human palm, with a concave shape and a rougher surface, in order to increase the contact area with the objects and to stimulate the sensors more easily.
ACKNOWLEDGMENT
This research was supported by the Swiss National Science Foundation project “Embryogenic Evolution: From Simulation to Robotic Application”, Nr. 200021-109864/1, and the European project “ROBOTCUB: ROBotic Open Architecture Technology for Cognition Understanding and Behavior”, Nr. IST-004370.
REFERENCES
[1] J. H. Kaas, “The functional organization of somatosensory cortex in primates,” Ann Anat, vol. 175, pp. 509–518, 1993.
[2] A. Streri and Féron, “The development of haptic abilities in very young infants: From perception to cognition,” Infant Behavior and Development, vol. 28, pp. 290–304, 2005.
[3] M. Molina and F. Jouen, “Manual cyclical activity as an exploratory tool in neonates,” Infant Behavior and Development, vol. 27, pp. 42–53, 2004.
[4] T. Striano and E. Bushnell, “Haptic perception of material properties by 3-month-old infants,” Infant Behavior & Development, vol. 28, pp. 266–289, 2005.
[5] E. W. Bushnell and J. P. Boudreau, “Motor development and the mind: The potential role of motor abilities as a determinant of aspects of perceptual development,” Child Development, vol. 64, no. 4, pp. 1005–1021, August 1993.
[6] S. Lederman and R. Klatzky, “Hand movements: A window into haptic object recognition,” Cognitive Psychology, vol. 19, pp. 342–368, 1987.
[7] S. J. Lederman and R. L. Klatzky, “Haptic exploration and object representation,” in Vision and Action: The Control of Grasping, M. Goodale, Ed. New Jersey: Ablex, 1990, pp. 98–109.
[8] G. Cannata and M. Maggiali, “An embedded tactile and force sensor for robotic manipulation and grasping,” in 5th IEEE-RAS International Conference on Humanoid Robots, 2005, pp. 80–85.
[9] ——, “Processing of tactile/force measurements for a fully embedded sensor,” in IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems, Heidelberg, Germany, 2006, pp. 160–166.
[10] L. Biagiotti, C. Melchiorri, P. Tiezzi, and G. Vassura, “Modelling and identification of soft pads for robotic hands,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2005), 2005.
[11] L. Biagiotti, P. Tiezzi, G. Vassura, and C. Melchiorri, “Modelling and controlling the compliance of a robotic hand with soft finger-pads,” Springer Tracts in Advanced Robotics, vol. 18, pp. 55–75, 2005.
[12] E. Torres-Jara, L. Natale, and P. Fitzpatrick, “Tapping into touch,” in Fifth International Workshop on Epigenetic Robotics, Nara, Japan, July 22–24, 2005.
[13] L. Natale, G. Metta, and G. Sandini, “Learning haptic representation of objects,” in International Conference on Intelligent Manipulation and Grasping, Genoa, Italy, July 2004.
[14] H. Yokoi, A. Hernandez Arieta, R. Katoh, W. Yu, I. Watanabe, and M. Maruishi, “Mutual adaptation in a prosthetics application,” in Embodied Artificial Intelligence, Lecture Notes in Computer Science, vol. 3139, F. Iida, R. Pfeifer, L. Steels, and Y. Kuniyoshi, Eds. Springer, ISBN: 3-540-22484-X, 2004.
[15] Y. Tada, K. Hosoda, and M. Asada, “Sensing ability of anthropomorphic fingertip with multi-modal sensors,” in 8th Conference on Intelligent Autonomous Systems (IAS-8), March 2004, pp. 1005–1012.
[16] K. Hosoda, Y. Tada, and M. Asada, “Anthropomorphic robotic soft fingertip with randomly distributed receptors,” Robotics and Autonomous Systems, vol. 54, no. 2, pp. 104–109, 2006.
[17] Y. Ishikawa, W. Yu, H. Yokoi, and Y. Kakazu, “Research on the double power mechanism of the tendon driven robot hand,” The Robotics Society of Japan, pp. 933–934, 1999.
[18] J. N. Hoffmann, A. G. Montag, and N. J. Dominy, “Meissner corpuscles and somatosensory acuity: The prehensile appendages of primates and elephants,” The Anatomical Record Part A, vol. 281A, pp. 1138–1147, 2004.
[19] T. Kohonen, J. Hynninen, J. Kangas, and J. Laaksonen, “SOM_PAK: The self-organizing map program package,” Report A31, Helsinki University of Technology, Laboratory of Computer and Information Science, Espoo, Finland, Tech. Rep., 1996.