Figure 4 - uploaded by Tony Belpaeme
Interaction with the iCub robot in the ungrouped condition of experiment 1

Source publication
Conference Paper
Full-text available
Language is special, yet its power to facilitate communication may have distracted researchers from the power of another potential precursor ability: the ability to label things, and the effect this can have in transforming or extending cognitive abilities. In this paper we present a simple robotic model, using the iCub robot, demonstrating the ef...

Context in source publication

Context 1
... the ungrouped condition a similar method was used, but the location of each of the three training objects was not identical. To facilitate running the experiment, the objects were hidden behind a screen and lifted into sight in turn (see Figure 4 below), each accompanied by the spoken word 'RED'. Testing was identical to that used in the grouped condition. ...

Similar publications

Conference Paper
Full-text available
This paper proposes a new walking pattern generation method for humanoid robots. The proposed method, based on the inverted pendulum model, uses the zero moment point (ZMP) and the center of mass (CoM). Our approach consists of a feedforward controller and a feedback controller. As the feedforward controller, pole-zero cancellation by series approximation (PZCSA) cont...
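As background for the ZMP/CoM relation this abstract refers to, here is a minimal numeric sketch under the standard linear inverted pendulum assumption (constant CoM height). This is textbook background, not the paper's PZCSA controller, and all numbers are illustrative.

```python
import numpy as np

# Linear inverted pendulum relation: p = x - (z_c / g) * x_ddot,
# where x is the CoM position and p the resulting ZMP.
g, z_c = 9.81, 0.6                 # gravity [m/s^2], assumed CoM height [m]
t = np.linspace(0.0, 2.0, 200)
x = 0.05 * np.sin(2 * np.pi * t)   # toy lateral CoM sway [m]
x_ddot = np.gradient(np.gradient(x, t), t)  # numerical CoM acceleration
zmp = x - (z_c / g) * x_ddot

# For stable walking the ZMP must stay inside the support polygon.
print(f"max |ZMP| = {np.abs(zmp).max():.3f} m")
```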
Article
Full-text available
Traditional industrial applications involve robots with limited mobility. Consequently, interaction (e.g. manipulation) was treated separately from whole-body posture (e.g. balancing), assuming the robot to be firmly connected to the ground. Foreseen applications involve robots with augmented autonomy and physical mobility. Within this novel context, phy...
Conference Paper
Full-text available
This paper presents the generation of dance performances by the humanoid robot HSR (HanSaRam)-VII, synchronized with Robonova, for entertainment. This heterogeneous team, RoboBees, participated in the 'Robots at Play Award 2006' and was nominated as one of the top six teams in the world. A method of generating and combining both periodic motion (on-li...
Conference Paper
Full-text available
Spatial scaffolding is a naturally occurring human teaching behavior, in which teachers use their bodies to spatially structure the learning environment to direct the attention of the learner. Robotic systems can take advantage of simple, highly reliable spatial scaffolding cues to learn from human teachers. We present an integrated robotic archite...
Article
Full-text available
The goal of the CoSy project is to create cognitive robots to serve as a testbed of theories on how humans work (13), and to identify problems and techniques relevant to producing general-purpose human-like domestic robots. Given the constraints on the resources at the robot's disposal and the complexity of the tasks that the robot has t...

Citations

... The first approach has been developed within the iTALK (Integration and Transfer of Action and Language Knowledge in Robots) project [63,75,89]. The model is inspired by infant development up to 2 years of age. ...
Article
Most semantic models employed in human-robot interaction concern how a robot can understand commands, but in this article the aim is to present a framework that allows dialogic interaction. The key idea is to use events as the fundamental structures for the semantic representations of a robot. Events are modeled in terms of conceptual spaces and mappings between spaces. It is shown how the semantics of major word classes can be described with the aid of conceptual spaces in a way that is amenable to computer implementation. An event is represented by two vectors: one force vector representing an action and one result vector representing the effect of the action. The two-vector model is then extended by the thematic roles so that an event is built up from an agent, an action, a patient, and a result. It is shown how the components of an event can be put together into semantic structures that represent the meanings of sentences. It is argued that a semantic framework based on events can generate a general representational framework for human-robot communication. An implementation of the framework involving communication with an iCub will be described.
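To make the two-vector event model concrete, here is a minimal sketch; the class and field names are my own invention, not the article's implementation.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Event:
    """Hypothetical encoding of the two-vector event model:
    an action is a force vector applied by an agent to a patient,
    and the result vector describes the change it produces."""
    agent: str
    action_force: np.ndarray   # force vector in a conceptual space
    patient: str
    result: np.ndarray         # change in the patient's state vector

# "The robot pushes the box": a forward force producing a displacement.
push = Event(agent="iCub",
             action_force=np.array([1.0, 0.0]),   # push along x
             patient="box",
             result=np.array([0.3, 0.0]))         # box moved 0.3 m along x
```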
... A computational implementation of this has been applied to an account of the developmental acquisition of concepts [17]: not only was the system able to complete the task with a high success rate, but also the errors it made were consistent with those made by humans. A similar computational implementation has also been used to demonstrate how word labels for real-world objects can facilitate further cognitive processing [18]. These examples provide a glimpse of the range of cognitive processing (relevant to human cognitive processing) that can be accounted for using the memory-centred perspective. ...
... For example, using the same mechanism, accounts have been given of concept acquisition [17] and of multi-modal robot behaviour alignment to an interaction partner [14]. Other systems using the same principles have been used to demonstrate the development of low-level sensory-motor coordination through experience [16], and the role of words in supporting new cognitive capabilities [18]. ...
Article
Full-text available
The Memory-Centred Cognition perspective places an active association substrate at the heart of cognition, rather than as a passive adjunct. Consequently, it takes prediction and priming on the basis of prior experience to be inherent and fundamental aspects of processing. Social interaction is taken here to minimally require contingent and co-adaptive behaviours from the interacting parties. In this contribution, I seek to show how the memory-centred cognition approach to cognitive architectures can provide a means of addressing these functions. A number of example implementations are briefly reviewed, particularly focusing on multi-modal alignment as a function of experience-based priming. While further refinement of the theory, and of implementations based thereon, is required, this approach provides an interesting alternative perspective on the foundations of cognitive architectures to support robots engaging in social interactions with humans.
... McMahon et al. (2012) developed a method for learning haptic adjectives from interactions, whereas Petrosino and Gold (2010), Dindo and Zambuto (2010), and Chella et al. (2009) studied learning color-, size- and distance-related adjectives based on visual features. Similar studies (Chauhan and Lopes, 2011; Haazebroek et al., 2011; Sugita et al., 2011; Glenberg and Gallese, 2011; Morse et al., 2011; Gold et al., 2009) proposed methods for learning object categories; however, a systematic evaluation of nouns and adjectives based on appearance and affordances has not been performed previously. ...
Article
We study how a robot can link concepts represented by adjectives and nouns in language with its own sensorimotor interactions. Specifically, an iCub humanoid robot interacts with a group of objects using a repertoire of manipulation behaviors. The objects are labeled using a set of adjectives and nouns. The effects induced on the objects are labeled as affordances, and classifiers are learned to predict the affordances from the appearance of an object. We evaluate three different models for learning adjectives and nouns using features obtained from the appearance and affordances of an object, through cross-validated training as well as through testing on novel objects. The results indicate that shape-related adjectives are best learned using features related to affordances, whereas nouns are best learned using appearance features. Analysis of the feature relevancy shows that affordance features are more relevant for adjectives, and appearance features are more relevant for nouns. We show that adjective predictions can be used to solve the odd-one-out task on a number of examples. Finally, we link our results with studies from psychology, neuroscience and linguistics that point to the differences between the development and representation of adjectives and nouns in humans.
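A minimal sketch of the comparison this abstract describes, using hypothetical feature arrays of mine: one classifier is cross-validated per feature type to ask whether a given word is better predicted from appearance or from affordances (scikit-learn is used here for brevity; the cited work's actual models differ).

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 120
appearance = rng.normal(size=(n, 40))   # hypothetical shape/colour descriptors
affordance = rng.normal(size=(n, 10))   # hypothetical effect/affordance features
is_round = rng.integers(0, 2, size=n)   # hypothetical label: does 'round' apply?

# Cross-validated accuracy of predicting the word from each feature type.
for name, X in [("appearance", appearance), ("affordance", affordance)]:
    acc = cross_val_score(SVC(), X, is_round, cv=5).mean()
    print(f"'round' from {name} features: {acc:.2f}")
```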
... It has been established that the learning of labels for objects is mediated by body posture, both in a robotic model (Morse et al. 2010a) and in infants (Smith and Samuelson 2010), where changes in body posture prime different representations (an example of association formation providing the substrate for cross-modal priming). Based on the proposal that learned labels can subsequently be used to extend cognitive capabilities through scaffolding (Clark 2008), an extension to the label-learning setup was proposed in which learned labels enabled an overlapping categorisation task to be completed, one which could not be completed if the labels had not first been learned (Morse et al. 2011). Cognition is therefore extended by this use of labels, the learning of which is conducted on a purely associative basis. ...
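A toy sketch of that associative mechanism, under assumptions of mine rather than the cited model: a Hebbian weight matrix links postures to labels, so re-adopting a posture later primes the label that was learned in that posture.

```python
import numpy as np

postures = ["left", "right"]
labels = ["modi", "toma"]   # toy novel-word labels
# Hebbian association weights: rows = postures, cols = labels.
W = np.zeros((len(postures), len(labels)))

def learn(posture, label, lr=1.0):
    """Strengthen the posture-label association (simple Hebbian update)."""
    W[postures.index(posture), labels.index(label)] += lr

def prime(posture):
    """Return the label most strongly primed by re-adopting a posture."""
    return labels[int(np.argmax(W[postures.index(posture)]))]

# An object is named while the robot leans left; the posture later
# recalls the name on its own, even without the object in view.
learn("left", "modi")
print(prime("left"))   # -> 'modi'
```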
Article
Full-text available
In the context of cognitive architectures, memory is typically considered as a passive storage device with the sole purpose of maintaining and retrieving information relevant to ongoing cognitive processing. If memory is instead considered to be a fundamentally active aspect of cognition, as increasingly suggested by empirically-derived neurophysiological theory, this passive role must be reinterpreted. In this perspective, memory is the distributed substrate of cognition, forming the foundation for cross-modal priming, and hence soft cross-modal coordination. This paper seeks to describe what a cognitive architecture based on this perspective must involve, and begins to explore how human-level cognitive competencies (namely episodic memory, word label conjunction learning, and social behaviour) can be accounted for in such a low-level framework. This proposal of a memory-centred cognitive architecture presents new insights into the nature of cognition, with benefits for computational implementations such as generality and robustness that have only begun to be exploited.
Chapter
Sensorimotor theories of perception are highly appealing to A.I. due to their apparent simplicity and power; however, they are not problem-free either. This paper presents a frank appraisal of sensorimotor perception, discussing and highlighting the good, the bad, and the ugly with respect to a potential sensorimotor A.I.
Chapter
This chapter sets the stage for the research on autonomous flight of the DelFly. First, a general introduction is given to artificial intelligence for robotics. This will permit the layman to understand some of the major challenges in creating autonomous robots, and the different solution approaches. Subsequently, the particular tasks and approaches to autonomous flight of Micro Air Vehicles are discussed. Whereas mainstream approaches are now being applied to ~1 kg MAVs such as quadrotors, it will become clear that light-weight flapping wing MAVs are best served by a computationally efficient, vision-based approach to autonomous flight. In particular, in the DelFly project we have adopted a purposive vision approach to autonomous flight, in which only the information necessary for the robot's task is extracted. The main goal of the approach is to allow the DelFly to autonomously explore unknown indoor environments, to which end we complement optical flow with other, appearance-based, visual inputs.
Article
It is now widely accepted that concepts and conceptualization are key elements towards achieving cognition on a humanoid robot. An important problem on this path is the grounded representation of individual concepts and the relationships between them. In this article, we propose a probabilistic method based on Markov Random Fields to model a concept web on a humanoid robot where individual concepts and the relations between them are captured. In this web, each individual concept is represented using a prototype-based conceptualization method that we proposed in our earlier work. Relations between concepts are linked to the co-occurrences of concepts in interactions. By conveying input from perception, action, and language, the concept web forms rich, structured, grounded information about objects, their affordances, words, etc. We demonstrate that, given an interaction, a word, or the perceptual information from an object, the corresponding concepts in the web are activated, much the same way as they are in humans. Moreover, we show that the robot can use these activations in its concept web for several tasks to disambiguate its understanding of the scene.
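A very simplified sketch of such a concept web, with made-up concepts and co-occurrence weights rather than the article's MRF formulation: activating one concept spreads activation to its neighbours in proportion to edge strength.

```python
# Toy concept web: edges weighted by how often two concepts co-occurred
# in interactions (illustrative numbers, not from the cited work).
web = {
    ("cup", "graspable"): 0.9,
    ("cup", "rollable"): 0.2,
    ("ball", "rollable"): 0.8,
    ("ball", "graspable"): 0.7,
}

def activate(concept):
    """Spread activation from one concept to its neighbours in the web."""
    act = {}
    for (a, b), w in web.items():
        if a == concept:
            act[b] = max(act.get(b, 0.0), w)
        elif b == concept:
            act[a] = max(act.get(a, 0.0), w)
    return act

print(activate("cup"))   # perceiving a cup primes 'graspable' over 'rollable'
```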
Conference Paper
Learning and conceptualizing word categories in language, such as verbs, nouns and adjectives, based on the sensorimotor interactions of a robot is a challenging topic in cognitive robotics. In this article, we summarize our approach, which is based on first learning affordances of objects by interacting with them, and then learning and conceptualizing verbs, nouns and adjectives from these interactions.
Conference Paper
In the cognitive robotics community, categories belonging to adjectives and nouns have been learned separately and independently. In this article, we propose a prototype-based framework that conceptualizes adjectives and nouns as separate categories that are, however, linked to and interacting with each other. We demonstrate how these co-learned concepts might be useful for a cognitive robot, especially in a game called "What object is it?" that involves finding an object based on a set of adjectives.
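As an illustration of the game described above, here is a minimal guessing loop over hypothetical adjective sets; the object whose learned adjectives best overlap the queried set is returned (the object names and scoring rule are mine, not the paper's).

```python
# Hypothetical learned adjective sets per object concept.
objects = {
    "ball": {"round", "soft", "small"},
    "box":  {"edgy", "hard", "big"},
    "cup":  {"round", "hard", "small"},
}

def what_object_is_it(adjectives):
    """Return the object whose adjective set best matches the query."""
    return max(objects, key=lambda o: len(objects[o] & set(adjectives)))

print(what_object_is_it({"round", "hard"}))   # -> 'cup'
```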