Figure 3. a. Cartoon event: character zipping mouth shut

Source publication
Article
This paper examines gestures that simultaneously express multiple physical perspectives, known as dual viewpoint gestures. These gestures were first discussed in McNeill’s 1992 book, Hand and mind. We examine a corpus of approximately fifteen hours of narrative data, and use these data to extend McNeill’s observations about the different possibili...

Similar publications

Article
Previous research has found that iconic gestures (i.e., gestures that depict the actions, motions or shapes of entities) identify referents that are also lexically specified in the co-occurring speech produced by proficient speakers. This study examines whether concrete deictic gestures (i.e., gestures that point to physical entities) bear a differ...
Article
In the present case study, we aimed to explore whether there were any differences between monolingual Turkish and Turkish-Italian bilingual children in terms of their use of language structures in Turkish while narrating a story from a picture book titled “Frog, where are you?” by Mayer (1969). Four monolingual Turkish and three Turkish-Italian bil...

Citations

... But they then study "dual viewpoint" gestures, in which either the two hands or one hand and the body (or another body part) take on different perspectives: dual viewpoint gestures "suggest that a speaker is taking multiple spatial perspectives on a scene at the same time . . . a rather impressive cognitive feat" (Parrill 2009). ...
... In role shift, this property enables multiple demonstrations, that is, signers can use different articulators to demonstrate linguistic and non-linguistic actions of two or more protagonists, or different aspects of the same event, simultaneously (Barberà and Quer 2018; Dudis 2004; Herrmann and Pendzich 2018; Steinbach 2021). The body of the signer can not only be used to express the grammatical role of the subject; the body can also be used to express the actor of an event, and different body parts can be used to express multiple demonstrations involving more than one actor (for multiple demonstrations in co-speech gestures, see Parrill 2009). All three aspects can be nicely illustrated by sign language versions of classical narratives (Cormier et al. 2013; Crasborn et al. 2007). For this purpose, I selected two annotated sequences taken from two classic Aesop fables from the Göttingen fable corpus (for a more detailed description of the fables, see Herrmann and Pendzich 2018). ...
... In addition to the aspects discussed in this article, such a theory needs to integrate the iconic potential of gestural demonstrations and clearly identify the iconic aspects relevant for the interpretation of a demonstration (Clark and Gerrig 1990; Perniss et al. 2010; Schlenker 2017b). And finally, such a theory should be integrated in a uniform theory of demonstrations in different modalities that accounts for modality-specific and modality-independent aspects of (gestural) demonstrations in different contexts (Dingemanse and Akita 2017; Ebert et al. 2020; Gawne and McCulloch 2019; Parrill 2009, 2010; Schlenker 2018a, 2018b). ...
Article
Sign languages make use of the full expressive power of the visual-gestural modality to report the utterances and/or actions of another person. A signer can shift into the perspective of one or more persons and reproduce the utterances or actions from the perspective of these persons. This modality-specific device of utterance and action report is called role shift or constructed action. Especially in sign language narration, role shift is a productive and expressive means that can be used to demonstrate linguistic and non-linguistic actions. Recent developments in sign language linguistics put forth new formal semantic analyses of role shift at the interface between sign language and gesture, integrating insights from classical cognitive and formal analyses of quotation, demonstration and perspective/context shift in spoken and sign languages. In this article, I build on recent accounts of role shift as a modality-specific device of demonstration and show that a modified version of this theory even accounts for cases of complex demonstrations including hybrid demonstrations, multiple demonstrations and demonstrations involving a complex interaction of gestural and linguistic components.
... Hinnell 2019; Jannedy & Mendoza-Denton 2005; Laparle 2021; Müller 2004), viewpoint (e.g. Parrill 2009, 2010, 2012), and speaker attitude (e.g. Calbris 2008; Teßendorf 2014; Wehling 2017). ...
Thesis
This dissertation examines the capacity of interactive gesture to contribute to discourse structure independent of accompanying speech. The close relationship between gesture and speech in face-to-face interaction is, at this point, well-established and accepted, especially within gesture studies and certain linguistic frameworks (e.g. Embodied Construction Grammar and Embodied Conversation Analysis). However, the integration of gesture into formal linguistic theory more generally is still in its early stages of development. This is especially true for interactive gesture and formal theories of discourse structure. To the author’s knowledge, this dissertation serves as the first in-depth exploration of the ability to formalize a theory of interactive meaning in gesture using a predictive model.
... But through this type of gesture the speaker can also adopt the point of view of the person performing the described action (character viewpoint), in the manner of reported speech. This duality also serves as a criterion in studies of narrative activities for observing which point of view the speaker prefers to adopt (Beattie & Shovelton, 2001; Merola, 2007; Parrill, 2009, 2010; Stites & Özçalışkan, 2017; Bressem et al., 2018; Parrill et al., 2018). McNeill (1992) calls an iconic gesture that represents an abstract idea a metaphoric gesture. ...
... In narration, referential gestures abound. On the one hand, the heavy reliance on representational gestures can be explained not only by the need to represent visually the referents introduced and maintained in the story, but also by the need to reproduce gesturally the actions performed by the characters in the story (Beattie & Shovelton, 2001; Merola, 2007; Parrill, 2009, 2010; Stites & Özçalışkan, 2017; Bressem et al., 2018; Parrill et al., 2018). On the other hand, through locative gestures and pointing, children first situate the entity in their visual space (Kendon, 1996; Colletta, 2004; McNeill, 1992, 2014) and then refer back to it when they mention it again (McNeill, 1992; So et al., 2009; Fantazi, 2010). ...
Thesis
The gestures we use while we speak constitute a communicative tool which is inseparable from the verbal component of speech. It has been shown that the use of gestures has a positive effect on lexical retrieval and speech planning during discourse production in children with developmental language disorder. Gestures vary in form and function according to speech genre, but their production could also depend on the lexical affiliate’s place within the different possible syntactic structures used in an utterance. The aim of this study is to analyze the multimodal behaviors of 23 children with language disorder and 23 typically developing children aged 7 to 10. We focused on whether the way the two groups build their discourse – and combine verbal syntax and gestures – reflects differences in relation to the presence of the language disorder, its severity, and the type of activity in which the child is involved. Children were videotaped as they performed two different types of descriptions, a narrative task, a guessing game, and a more spontaneous interaction with an adult. We analyzed gestures according to their form and function, and utterances according to their syntactic structure. The articulation of gestures and their lexical affiliates was also studied. Results show a different degree of multimodal complexity depending not only on the presence of the language disorder but also on the type of activity and discourse genre. At the gestural level, while TD children use gestures to enhance their utterances, DLD children also use them to compensate for language difficulties. Different multimodal profiles can be identified depending on how each child articulates gestures and the syntactic structures of their verbal productions.
... In both the spoken and signed language literatures there are discrepancies regarding which terms should be used for various aspects of the domain of viewpoint. "Viewpoint" as a term itself is perhaps overarching, and is the term favoured by theorists (e.g., Dancygier, 2012a, 2012b; McNeill, 1992; Parrill, 2009; Sweetser, 2012; Vandelanotte, 2017), who see it as describing not only visual and physical experiences but abstract construals as well. Nonetheless, even abstract conceptualizations are seen as emerging from embodied experiences. ...
Article
Recent work has shown that ASL (American Sign Language) signers not only articulate the language in the space in front of and around them, they interact with that space bodily, such that those interactions are frequently viewpointed. At a basic level, signers use their bodies to depict the actions of characters, either themselves or others, in narrative retelling. These viewpointed instances seem to reflect “embodied cognition”, in that our construal of reality is largely due to the nature of our bodies (Evans and Green, 2006) and “embodied language” such that the symbols we use to communicate are “grounded in recurring patterns of bodily experience” (Gibbs, 2017: 450). But what about speakers of a spoken language such as English? While we know that meaning and structure for any language, whether spoken or signed, affect and are affected by the embodied mind (note that the bulk of research on embodied language has been about spoken, not signed, language), we can learn much about embodied cognition and viewpointed space when spoken languages are treated as multimodal. Here, we compare signed ASL and spoken, multimodal English discourse to examine whether the two languages incorporate viewpointed space in similar or different ways.
... Visual viewpoint can be decomposed into two categories: (1) a visual simulation extending from the "eyes" of a component in the system and (2) a visual simulation extending from a location separate from the modeled components in the system. We draw on the gesture literature (e.g., McNeill, 1992; Parrill, 2009; Stec, 2012) to characterize the former as a character viewpoint and the latter as an observer viewpoint (while cognizant that "characters" in STEM simulations could be inanimate components). We also attend to a third category: (3) hybrid viewpoints in which two viewpoints (e.g., character and observer; observer and observer) merge at once through digital overlays, multiple synchronized displays, or other yet-to-be-invented ways of commingling viewpoint. ...
... This is not unique to educational technologies. In gesture, individuals can depict scenes at once from two points of view (with chimera gestures; see Parrill, 2009; McNeill, 1992), for example, by gesturally hitting oneself to show how one character knocked another character on the head with an umbrella (the gesturer's arm represents the arm of one character and the gesturer's head represents the head of a different character). Similarly, filmmakers build suspense by switching between viewpoints, for example, by rotating between an individual stuck in a burning apartment and firefighters making their way to the scene (Sklar, 1994), and films for decades have employed split screen techniques, in which two angles on the same or different scene are simultaneously visible. ...
Article
This paper describes a framework for making explicit the design decisions in the development of immersive and interactive STEM learning technologies. This framework consists of three components: (1) visual viewpoint, the location from which a visual simulation depicts observable components; (2) embodied interaction, the ways in which a learner can physically engage with the simulation interface; and (3) learners’ roles, the purpose and the participation structure the technology presents to the learner. The recent literature on the design of STEM learning technologies is reviewed with the lens of how the three components have been leveraged and what, if any, rationale is provided for the design decisions that were made. The definition and review of each component is followed by a set of reflective questions intended to prompt researchers and designers to be more explicit about these decisions and the ways they are intended to impact student learning in both the design process and the reporting of their work. The paper concludes with a discussion of how the three components interact, and how their articulation can support theory building as well as the proliferation of more effective STEM learning technology designs.
... To this end, we draw on Dancygier (2011), in which she applies Mental Spaces Theory (Fauconnier, 1994) to the analysis of narratives from a cognitivist perspective. We then seek to connect viewpoint studies with research on narrative from a multimodal perspective, building on the proposals of Verhagen (2010), Sweetser (2012) and Parrill (2009, 2012). Based on the considerations made by Parrill (2009), we present a description of the terms used to characterize point of view and perspective, drawing on parameters from narrative studies combined with Gesture Studies and American Sign Language. ...
Article
In this dissertation, we analyze the behavior of gestures and gaze direction in multimodal narratives in Brazilian Portuguese. We start from the hypothesis that two multimodal articulators – specifically, gestures and gaze direction – are configured independently in different Mental Spaces. In order to demonstrate how these articulators are configured in narrative contexts in Brazilian Portuguese, we selected a personal narrative told by the actress Marisa Orth on the weekly program “Que História é Essa, Porchat?” on the GNT channel. Methodologically, we divided the narrative into 11 multimodal occurrences across 3 narrative blocks: 3 occurrences in the “Exposition” block, the presentation of the initial facts; 5 occurrences in the “Climax” block, which refers to the conflict, the culminating moment of the story; and 3 occurrences in the “Resolution” block, the conclusion of the conflict. We first analyzed the linguistic choices of each occurrence and then proceeded to the multimodal analysis of the occurrences, based on the identification and description of the gestural strokes; we then identified the form of each gesture, taking into account the parameters of the Linguistic Annotation System for Gestures (LASG). With regard to the marking of Mental Spaces by gestures, we used parameters that relate gestures to the segment of the narration in which a given interaction is located, whether as part of an interaction in the here-and-now (Ground) or as part of an interaction in the narrative context (Narrative Space). With regard to the marking of Mental Spaces by gaze direction, we used categorizations that verify whether the narrator’s gaze represents the gaze of a character (in the Narrative Space) or the gaze of the narrator in the interaction (in the Ground). Only in cases where gestures marked the Narrative Space did we then analyze whether they are configured as observer viewpoint gestures (O-VPT) or character viewpoint gestures (C-VPT). In these cases, we also analyzed the gestures with respect to their dominant function within the narrative: representation, expression, or appeal. Likewise, only in cases where gaze direction marked the Narrative Space did we also analyze its function within the narrative: i) character enactment and ii) narration proper. The results showed that the predominant marking of Mental Spaces by gestures was the marking of the Narrative Space, and that the predominant marking of Mental Spaces by gaze direction was the marking of the Ground.
... A speaker might plan to express more than one perspective at the same time. In this vein, Parrill (2009) investigates dual viewpoint gestures (first noted by McNeill, 1992; see also Cassell et al., 1999 for discussion), i.e. gestures that simultaneously express more than one viewpoint. If gestures can express more than one viewpoint at the same time, it should also be possible to express more than one viewpoint in gesture and speech. ...
... Although there is some discussion about the possibilities of linguistic realizations of such multiple perspectives within the evidentiality literature (see Evans, 2005 and Bergqvist, 2015 for some discussion), a systematic investigation of the linguistic tools to simultaneously convey multiple perspectives is still outstanding. As for the gestural realization of perspective, McNeill (1992), Cassell et al. (1999) and Parrill (2009) discuss exactly such examples of dual viewpoints. Cassell et al. (1999) present an example where someone hands something to himself while uttering she got something. ...
... Here, the arm and hand embody the giver and the rest of the body the receiver. McNeill (1992) and Parrill (2009) discuss similar cases as well as cases where someone performs an OVG and a CVG at the same time. Parrill (2009) reports an example where the narrator talks about a character and impersonates the reported character by performing a body lean to mimic the action of this character, hence a CVG, and at the same time indicates the trajectory of a certain path taken by the character, clearly an OVG. ...
Article
In this paper, we investigate the question of whether and how perspective taking at the linguistic level interacts with perspective taking at the level of co-speech gestures. In an experimental rating study, we compared test items clearly expressing the perspective of an individual participating in the event described by the sentence with test items which clearly express the speaker’s or narrator’s perspective. Each test item was videotaped in two different versions: In one version, the speaker performed a co-speech gesture in which she enacted the event described by the sentence from a participant’s point of view (i.e. with a character viewpoint gesture). In the other version, she performed a co-speech gesture depicting the event described by the sentence as if it were observed from a distance (i.e. with an observer viewpoint gesture). Both versions of each test item were shown to participants, who then had to decide which of the two versions they found more natural. Based on the experimental results, we argue that there is no general need for perspective taking on the linguistic level to be aligned with perspective taking on the gestural level. Rather, there is a clear preference for the more informative gesture.
... Some work in gesture research has focused on the analysis of gestural blends where the story character has a human or human-like body (e.g., McNeill, 1992; Parrill, 2009). Other gesture research has focused on viewpoint in representations of non-human processes such as mathematical graphs of polynomial functions (e.g., Gerofsky, 2010). ...
... The analysis of spatial FoR in the packet switching study showed that preserving space across gestures and body movements involved the use of prior gestures as anchors and grounds, whereas rewriting space involved the creation of new anchors and grounds. Because gesturers have leeway to create spatial maps of non-present scenes from any visual angle, to drift back and forth between angles, and to adopt different character and observer viewpoints (McNeill, 1992; Parrill, 2009), it is not a given that prior gestured components will function as grounds and anchors for upcoming gestures. Keeping gestures spatially congruent requires constructing, remembering, and then continuing to use the locations of previous gestured events, something people achieve in conversation (Stec & Huiskes, 2014) and during public storytelling (Haviland, 1993). In contrast, the students learning about bees in the augmented reality STEP system had to temporarily ignore some visual symbols in the environment, imagine the locations of other symbols, and coordinate how an angle relative to a ground and anchor in the hive related to an angle relative to a different ground and anchor in the meadow. ...
Article
Gesture is recognized as part of and integral to cognition. The value of gesture for learning is contingent on how it gathers meaning against the ground of other relevant resources in the setting—in short, how the body is laminated onto the surrounding environment. With a focus on lamination, this paper formulates an integrated theory of viewpoint and spatial reasoning; develops an embodied approach to documenting and understanding the live construction of students’ spatial models; and offers new implications for the teaching of spatially complex concepts. We start with a study of how undergraduate students playfully gesture the first-person movements of components of an engineering system, step out to depict how the system appears from the outside, and all the while track how the components of the system spatially interact in the open canvas of empty space around the body. Students who manage all three—switching from character viewpoints to observer viewpoints while maintaining a coherent organization of space—better learn the engineering concept. We then examine this process in the unscripted discourse of a classroom of 1st and 2nd graders pretend-playing as bees. This second study extends the analysis of interactions between spatial reasoning and viewpoint into unplanned teacher-student discourse (including adjustments in talk and action over time) and a materially rich setting. In all, the paper formulates an embodied learning framework that integrates viewpoint and spatial reasoning with implications for learning design.