Anthony F. Morse's research while affiliated with University of Plymouth and other places

What is this page?


This page lists the scientific contributions of an author who either does not have a ResearchGate profile or has not yet added these contributions to their profile.

It was created automatically by ResearchGate to provide a record of this author's body of work. We create such pages to advance our goal of creating and maintaining the most comprehensive scientific repository possible. In doing so, we process publicly available (personal) data relating to the author as a member of the scientific community.


Publications (49)


When Object Color Is a Red Herring: Extraneous Perceptual Information Hinders Word Learning via Referent Selection
  • Article

January 2019 · 50 Reads · 3 Citations

IEEE Transactions on Cognitive and Developmental Systems

Katherine E. Twomey · Anthony F. Morse · [...]

Learning words from ambiguous naming events is difficult: in such situations, children struggle to ignore task-irrelevant information when learning object names. The current study reduces the problem space of learning names for object categories by holding color constant between the target and the other, extraneous objects. We examine how this influences two types of word learning (retention and generalization) in both 30-month-old children (Experiment 1) and the iCub humanoid robot (Experiment 2). Overall, both the children and iCub performed well on the retention trials, but they were only able to generalize the novel names to new exemplars of the target categories if the objects were originally encountered in sets with objects of the same colors, not if the objects were originally encountered in sets with objects of different colors. These data demonstrate that presenting less information during the learning phase narrows the problem space and leads to better word learning for both children and iCub. Findings are discussed in terms of cognitive load and desirable difficulties.


Figure 1: Initial setup of the experiment, with an iCub robot looking at a table with unknown objects. 
Learn, plan, remember: A developmental robot architecture for task solving
  • Conference Paper
  • Full-text available

September 2017 · 225 Reads · 2 Citations

This paper presents a robot architecture heavily inspired by neuropsychology, developmental psychology and research into "executive functions" (EF), which are responsible for planning capabilities in humans. The architecture is presented in light of this inspiration, mapping its modules to the corresponding functions in the brain. We emphasize the importance and effects of these modules in the robot, and their similarity to the effects seen in humans with lesions of the frontal lobe. Developmental studies related to these functions are also considered, focusing on how they relate to the robot's different modules and how the developmental stages in a child relate to improvements in the different modules of this system. An experiment with the iCub robot is compared with experiments with humans, strengthening this similarity. Furthermore, we propose an extension to this system that integrates it with the "Epigenetic Robotics Architecture" (ERA), a system designed to mimic how children learn the names and properties of objects. In the previous implementation of this architecture, the robot had to be taught the names of all the necessary objects before plan execution, a learning step that was entirely driven by the human interacting with the robot. With this extension, we aim to make the learning process fully robot-driven: an iCub robot will interact with the objects while trying to recognise them, and ask a human for input if and when it does not know the objects' names.
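The robot-driven learning process described in this abstract can be sketched as a simple active-learning loop; the function names (`recognise`, `ask_human`, `learn`) and the confidence threshold below are illustrative assumptions, not part of the published architecture.

```python
def learning_loop(objects, recognise, ask_human, learn, threshold=0.7):
    """Acquire object names autonomously, querying the human only on demand.

    Illustrative sketch: `recognise` returns a (name, confidence) guess,
    `ask_human` queries the interacting human, `learn` stores the answer.
    """
    for obj in objects:
        name, confidence = recognise(obj)
        if confidence < threshold:
            # Robot-driven query: only objects the robot cannot
            # recognise trigger interaction with the human.
            name = ask_human(obj)
            learn(obj, name)
        yield obj, name
```

The design point is that the human is consulted only when recognition fails, so learning is initiated by the robot rather than driven entirely by the human teacher.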


Children's referent selection and word learning: Insights from a developmental robotic system

October 2016 · 353 Reads · 15 Citations

Interaction Studies

It is well-established that toddlers can correctly select a novel referent from an ambiguous array in response to a novel label. There is also a growing consensus that robust word learning requires repeated label-object encounters. However, the effect of the context in which a novel object is encountered is less well understood. We present two embodied neural network replications of recent empirical tasks, which demonstrated that the context in which a target object is encountered is fundamental to referent selection and word learning. Our model offers an explicit account of the bottom-up associative and embodied mechanisms which could support children's early word learning and emphasises the importance of viewing behaviour as the interaction of learning at multiple timescales.


How context affects early language acquisition: An embodied model of early referent selection and word learning.

September 2016 · 335 Reads

Word learning is central to language acquisition. From as early as 18 months, toddlers can disambiguate the referent of a novel label from an ambiguous array, and reinforce this label-object association over repeated encounters. While cross-situational word learning has been demonstrated repeatedly, the cognitive mechanisms which underlie early word learning are less well understood. To explore these processes we replicated two recent studies of early word learning using the iCub robot (Metta et al., 2010) and the Epigenetic Robotics Architecture (Morse, de Greef, Belpaeme & Cangelosi, 2011), instantiating word learning as a process of learning simple associations between visual and lexical input in an embodied system. Simulations captured the empirical results, and demonstrated that the context in which a to-be-learned object is encountered – both visual and temporal – is critical to referent selection and word learning. Our model offers an explicit account of the bottom-up associative and embodied mechanisms which support children’s word learning and emphasises the importance of viewing behaviour as the interaction of learning at multiple timescales. In particular, this work highlights the importance of the micro-level dynamics that emerge from the interaction between the movement of the body and the in-task context, making the prediction for future empirical work that the physical environment in which early learning events occur may have important consequences for language acquisition.
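The core mechanism named in this abstract, learning simple associations between visual and lexical input, can be illustrated with a minimal Hebbian sketch. The class name and parameters here are hypothetical stand-ins for the much richer Epigenetic Robotics Architecture, not the authors' code.

```python
import numpy as np

class AssociativeMap:
    """Cross-modal Hebbian association between visual and lexical input
    (illustrative sketch, not the ERA implementation)."""

    def __init__(self, n_visual, n_words, lr=0.1):
        self.w = np.zeros((n_visual, n_words))  # visual <-> word link strengths
        self.lr = lr

    def expose(self, visual, word):
        # Hebbian update: strengthen links between co-active visual and word units.
        self.w += self.lr * np.outer(visual, word)

    def select_referent(self, word, candidates):
        # Referent selection: choose the candidate visual pattern whose
        # accumulated association with the heard word is strongest.
        scores = [v @ self.w @ word for v in candidates]
        return int(np.argmax(scores))
```

Because associations accumulate across exposures, repeated label-object encounters strengthen the correct mapping, which is the cross-situational effect the abstract describes.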


Why Are There Developmental Stages in Language Learning? A Developmental Robotics Model of Language Development

September 2016 · 203 Reads · 46 Citations

Cognitive Science: A Multidisciplinary Journal

Most theories of learning would predict a gradual acquisition and refinement of skills as learning progresses, and while some highlight exponential growth, this fails to explain why natural cognitive development typically progresses in stages. Models that do span multiple developmental stages typically have parameters to “switch” between stages. We argue that by taking an embodied view, the interaction between learning mechanisms, the resulting behavior of the agent, and the opportunities for learning that the environment provides can account for the stage-wise development of cognitive abilities. We summarize work relevant to this hypothesis and suggest two simple mechanisms that account for some developmental transitions: neural readiness focuses on changes in the neural substrate resulting from ongoing learning, and perceptual readiness focuses on the perceptual requirements for learning new tasks. Previous work has demonstrated these mechanisms in replications of a wide variety of infant language experiments, spanning multiple developmental stages. Here we piece this work together as a single model of ongoing learning with no parameter changes at all. The model, an instance of the Epigenetic Robotics Architecture (Morse et al., 2010) embodied on the iCub humanoid robot, exhibits ongoing multi-stage development while learning pre-linguistic and then basic language skills.


Social Development of Artificial Cognition

March 2016 · 73 Reads · 4 Citations

Recent years have seen a growing interest in applying insights from developmental psychology to build artificial intelligence and robotic systems. This endeavour, called developmental robotics, is not only a novel method of creating artificially intelligent systems, but also offers a new perspective on the development of human cognition. While once cognition was thought to be the product of the embodied brain, we now know that natural and artificial cognition results from the interplay between an adaptive brain, a growing body, the physical environment and a responsive social environment. This chapter gives three examples of how humanoid robots are used to unveil aspects of development, and how we can use development and learning to build better robots. We focus on the domains of word-meaning acquisition, abstract concept acquisition and number acquisition, and show that cognition needs embodiment and a social environment to develop. In addition, we argue that Spiking Neural Networks offer great potential for the implementation of artificial cognition on robots.



Embodied Language Learning and Cognitive Bootstrapping: Methods and Design Principles

January 2016 · 567 Reads · 18 Citations

International Journal of Advanced Robotic Systems

Co-development of action, conceptualization and social interaction mutually scaffold and support each other within a virtuous feedback cycle in the development of human language in children. Within this framework, the purpose of this article is to bring together diverse but complementary accounts of research methods that jointly contribute to our understanding of cognitive development and in particular, language acquisition in robots. Thus, we include research pertaining to developmental robotics, cognitive science, psychology, linguistics and neuroscience, as well as practical computer science and engineering. The different studies are not at this stage all connected into a cohesive whole; rather, they are presented to illuminate the need for multiple different approaches that complement each other in the pursuit of understanding cognitive development in robots. Extensive experiments involving the humanoid robot iCub are reported, while human learning relevant to developmental robotics has also contributed useful results. Disparate approaches are brought together via common underlying design principles. Without claiming to model human language acquisition directly, we are nonetheless inspired by analogous development in humans and consequently, our investigations include the parallel co-development of action, conceptualization and social interaction. Though these different approaches need to ultimately be integrated into a coherent, unified body of knowledge, progress is currently also being made by pursuing individual methods.


Fig 1.  The neural model controlling the iCub robot in ongoing learning.
External input to each field is constantly driven by visual input, momentary body posture, and online speech recognition. Internal input to each field is a spreading activation via associative connections subject to ongoing learning and via the body posture. Note: the neural model forms the highest layer of a subsumption architecture controlling the robot, further details are in the Supplementary Information to this paper. (The individual shown in this figure has given written informed consent (as outlined in PLOS consent form) to publish this image).
Fig 2.  The timeline of an individual in Experiment 1 (no-switch condition), showing the neural activity in the Vision, Posture, and Word Fields as well as the visual input to iCub at each step.
Fig 3.  The timeline of an individual in experiment 4 (interference task), showing the neural activity in the Vision, Posture, and Word Fields as well as the visual input to iCub at each step.
Fig 4.  Timeline of experiment 6 (above) and experiment 8 (below).
Steps 1–4 expose the infant to the target and foil objects in consistent left and right locations. In step 5 the infant is told ‘this is a modi’ while the objects are out of sight (hidden in buckets) in experiment 6, or while the foil object is in the target object location and being attended in experiment 8. Steps 6 & 7 repeat the original exposure of the target and foil, and in step 8 the infant is shown both objects in a new location and asked ‘where is the modi’. Experiments 7 and 9 follow the same timeline with the addition that step 5 occurs in a different posture from all other steps. (The individual shown in this figure has given written informed consent (as outlined in PLOS consent form) to publish this image).
Fig 5.  Comparison between the Child and Robot data showing the means of the proportion of correct choices (and standard error of the means) for all experiments, and using the low-learning rate robot data.
Dotted line denotes chance, p < 0.05. Specific values for the child data and the robot data are as follows. In the original Baldwin task, when objects and names were separately linked to the same posture, the robot correctly mapped the name to the target (Exp 1), M = 0.71 (SD = 0.41), at above-chance levels, t(19) = 2.2, p < 0.05, d = 0.51. Infants also correctly mapped the name to the target (Exp 6), M = 0.71 (SD = 0.20), at above-chance levels, t(15) = 4.16, p < 0.001, d = 1.04. In Experiment 2, where the locations of the objects were switched, the robot failed to map the name to the target, M = 0.46, p = 0.64, and did so reliably less often than in the standard Baldwin condition, t(38) = 0.03, p < 0.05, d = 0.58. In the Baldwin task with posture change, when Step 5, the naming event, was experienced in a new posture, the robot (Exp 3) and the infants (Exp 7) failed to map the name to the object, both performing at chance (Robot: M = 0.42, p = 0.85; Child: M = 0.41, p = 0.16), and did so reliably less often than in the standard Baldwin task where there was no posture shift (Robot: t(38) = 2.49, p < 0.05, d = 0.78; Infant: t(30) = 3.73, p < 0.001, d = 1.32). In the interference task, the toddlers showed the same interference effect as the robot, and as the toddlers in Samuelson et al., 2011: when the target object was explicitly named at a location and posture associated with the distractor object, both the robot (Exp 4) and children (Exp 8) selected the target referent at below-chance levels; however, only the child data were significantly below chance, M = 0.36 (SD = 0.4), t(19) = -1.5, p = 0.07, d = 0.34 (robot data p = 0.07). In the interference task with a posture change, when the Phase 1 experiences were distinguished from the Phase 2 naming events by a posture shift, performance was not above chance for either children (Exp 9) or the robot (Exp 5), but the interference effect present in Experiments 4 & 8 was reduced (p = 0.09 and p = 0.13, respectively).
However, for both the child data and the robot data, the named target in the posture-shift condition was reliably selected more often than when there was no posture shift (Child: t(30) = -2.59, p < 0.05, d = 0.91; Robot: t(38) = -1.87, p < 0.05, d = 0.24).
Posture Affects How Robots and Infants Map Words to Objects

March 2015 · 320 Reads · 55 Citations

For infants, the first problem in learning a word is to map the word to its referent; a second problem is to remember that mapping when the word and/or referent are again encountered. Recent infant studies suggest that spatial location plays a key role in how infants solve both problems. Here we provide a new theoretical model and new empirical evidence on how the body, and its momentary posture, may be central to these processes. The present study uses a name-object mapping task in which names are either encountered in the absence of their target (experiments 1-3, 6 & 7), or when their target is present but in a location previously associated with a foil (experiments 4, 5, 8 & 9). A humanoid robot model (experiments 1-5) is used to instantiate and test the hypothesis that body-centric spatial location, and thus the body's momentary posture, is used to centrally bind the multimodal features of heard names and visual objects. The robot model is shown to replicate existing infant data and then to generate novel predictions, which are tested in new infant studies (experiments 6-9). Despite spatial location being task-irrelevant in this second set of experiments, infants use body-centric spatial contingency over temporal contingency to map the name to the object. Both infants and the robot remember the name-object mapping even in new spatial locations. However, the robot model shows how this memory can emerge: not from separating bodily information from the word-object mapping, as proposed in previous models of the role of space in word-object mapping, but through the body's momentary disposition in space.
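A minimal sketch of the binding hypothesis described in this abstract, assuming simple Hebbian links: words and objects are each associated with the momentary body posture rather than directly with one another, so a heard word can retrieve an object via the posture they share. All names and sizes below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

class PostureBinding:
    """Posture-mediated word-object binding (illustrative sketch)."""

    def __init__(self, n_posture, n_visual, n_words):
        self.vis_post = np.zeros((n_visual, n_posture))   # object <-> posture links
        self.word_post = np.zeros((n_words, n_posture))   # word <-> posture links

    def see(self, visual, posture, lr=0.1):
        # Hebbian update: associate the seen object with the current posture.
        self.vis_post += lr * np.outer(visual, posture)

    def hear(self, word, posture, lr=0.1):
        # Associate the heard word with the current posture.
        self.word_post += lr * np.outer(word, posture)

    def lookup(self, word):
        # A word reactivates the postures it was heard in; those postures
        # spread activation to the objects seen there, binding name to object.
        posture = self.word_post.T @ word
        return self.vis_post @ posture
```

Under this sketch, naming an object while the body is oriented toward a location previously associated with a foil would bind the name to the foil, consistent with the interference pattern reported in the study.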


Fig. 1: Results of the robotic model and of the experiments with children under different experimental conditions. Experiment 1: The child's attention is drawn to the target object's location with no object present during naming. Experiment 2: Same procedure, but the object's location is changed. Experiment 3: A posture change is introduced during naming (child data not available). Experiment 4: Follows Experiment 1, but the distractor object now occupies the naming location. Experiment 5: Replicates Experiment 4 but with a posture change (child data not available). Child data and analyses for Experiments 3 and 5 are in preparation for publication.
Fig. 2: Simulation of counting gestures with the iCub (left). The trajectories obtained from the joint angles, processed post hoc with Principal Component Analysis, provided the input to the neural network (right).
L'apprendimento linguistico e numerico nei "developmental robots"

March 2015 · 153 Reads

Language and number learning in developmental robots. Developmental Robotics is the interdisciplinary approach to the autonomous design of behavioural and cognitive capabilities in artificial agents that takes direct inspiration from the developmental principles and mechanisms observed in natural cognitive systems. This approach puts strong emphasis on constraining the robot's cognitive architecture and behavioural and learning performance onto known child psychology theories and data, allowing the modelling of the developmental succession of qualitative and quantitative stages leading to the acquisition of adult-like cognitive skills. In this paper we present a set of studies based on the developmental robotics approach looking specifically at the modelling of embodied phenomena in the acquisition of linguistic and numerical cognition capabilities.


Citations (41)


... Multiple lines of research have shown that word learning is influenced by the perceived context in which a new word is learned, in particular by the physical characteristics of the referent and the objects that surround it (Horst, Scott, & Pollard, 2010; Horst, Twomey, Morse, Nurse, & Cangelosi, 2020; Horst & Samuelson, 2008; Houston-Price et al., 2006; Smith, Colunga, & Yoshida, 2010; Yu & Smith, 2012). For example, children's word learning skills are enhanced when they encounter fewer competitors and when those competitors are familiar (Horst et al., 2010). ...

Reference:

Perceptual dissimilarity, cognitive and linguistic skills predict novel word retention, but not extension skills in Down syndrome
When Object Color Is a Red Herring: Extraneous Perceptual Information Hinders Word Learning via Referent Selection
  • Citing Article
  • January 2019

IEEE Transactions on Cognitive and Developmental Systems

... Children's language development is remarkable, because in a short time children are able to master a highly complex language. Research suggests that infants are highly sensitive to the language used in their environment, as shown by their sensitivity to phoneme changes, even when those phonemes shift to a non-native language (Morse & Cangelosi, 2017). In addition, before infants learn language, they attend to and discriminate among the sounds in their environment. ...

Why Are There Developmental Stages in Language Learning? A Developmental Robotics Model of Language Development
  • Citing Article
  • September 2016

Cognitive Science: A Multidisciplinary Journal

... How does any learner begin to initiate the discovery of meaningful units in any target language? In theories of language acquisition, this problem is called the bootstrapping problem (Caza & Knott, 2012; Lyon et al., 2016). The term "bootstrapping" is derived from a fable published in 1785 by Rudolf Eric Raspe [born 1737 - died 1794] (2020). ...

Embodied Language Learning and Cognitive Bootstrapping: Methods and Design Principles
International Journal of Advanced Robotic Systems


... While there is only a small number of robotic approaches dealing with explicit internal simulation, most of these are using very simple robotic architectures with only a very small number of degrees of freedom [for example, see Svensson et al. (2009) or Chersi et al. (2013)]. It should further be mentioned that predictive models are also used to anticipate the visual effects of the robot's movements (e.g., Hoffmann, 2007; Möller and Schenck, 2008). ...

Representation as internal simulation: A minimalistic robotic model
  • Citing Article
  • January 2009

... In a similar vein, recent work in robotics underlines the need for new robots to be equipped with mechanisms of neural and perceptual readiness (Cangelosi and Schlesinger, 2015) but also with "affects" mediating and regulating the sensorimotor behaviors (Zhong et al., 2016). Testing these robots would allow a breakthrough on developmental models, providing crucial hints on the developmental stages of affective subjectivity, to finally scrutinize the interplay between the adaptive brain, the growing body, the responsive social context, and the physical environment (Belpaeme et al., 2016). ...

Social Development of Artificial Cognition
  • Citing Chapter
  • March 2016

... Importantly, children's word learning and attention are fundamentally inter-related. During fast mapping, children must focus their attention on a novel word's intended referent while excluding non-target competitors (Twomey et al., 2016). This requires children to navigate their attention across multiple components of the learning environment and coordinate their attention to corresponding audio-visual stimuli during naming events (Samuelson et al., 2017). ...

Children's referent selection and word learning: Insights from a developmental robotic system

Interaction Studies

... A novel interdisciplinary research paradigm, known as Developmental Neuro-Robotics (DNR), has been recently introduced (Cangelosi and Schlesinger, 2015;Krichmar, 2018;Di Nuovo, 2020) with the aim to create biologically plausible robots, whose control units directly model some aspect of the brain. DNR is still making its first steps, but it has been already successfully applied in the modelling of embodied word learning as well as the development of perceptual, social, language, and abstract cognition (Asada et al., 2009;Di Nuovo et al., 2013;Cangelosi et al., 2016;Cangelosi and Stramandinoli, 2018;Nocentini et al., 2019). A research area of interest for DNR is the development of numerical cognition (Di Nuovo and Jay, 2019;Di Nuovo and McClelland, 2019), which focuses on the use of fingers and gestures to support the initial learning of digits (Di Nuovo, 2020;Pecyna et al., 2020) as it has been found by numerous developmental psychology and neuro-imaging studies (Goldin-Meadow et al., 2014;Soylu et al., 2018). ...

Embodied language and number learning in developmental robots

... In the context of online systems for robotics, research has focused on how result verbs can be modelled e.g. [19]– [22]. However, when it comes to human-robot interaction, the robot should also be able to recognize human actions by the manner they are performed. ...

ITALK: Integration and transfer of action and language knowledge in robots
  • Citing Article
  • January 2010

... Human cognitive process has been extensively studied in cognitive psychology, cognitive neuroscience and cognitive informatics. Two of the most classical models are multi-store model and working memory model (Ruini et al., 2012). They abstract the human cognitive process into several memory modules which have been used until now. ...

Towards a Bio-Inspired Cognitive Architecture for Short-Term Memory in Humanoid Robots

... While this research was relevant to our topic and helped inform our thinking, the study was done with high school students, and so is not included in the discussion below; likewise, Khawla Badwan's (2021) work with 18-25-year-olds in Manchester was excluded from the final review. Similarly, we encountered several articles in the field of developmental robotics that explicitly theorize place: for instance, Morse et al. (2015) suggest that posture and spatial positioning interfere with mapping new vocabulary to novel objects for both robots and infants. Again, while the research was relevant because of its theorization of the relationship between vocabulary emergence and place, we felt that the emphasis on robots (no matter how 'developmental') made the study too tangential for inclusion in our short review here. ...

Posture Affects How Robots and Infants Map Words to Objects
PLOS ONE
