The Role of Gesture in Learning:
Do Children Use Their Hands to Change Their Minds?
Susan M. Wagner & Susan Goldin-Meadow
University of Chicago
April 2004
Address for correspondence
Susan M. Wagner
Department of Psychology
University of Chicago
5848 S. University Ave.
Chicago IL 60637
swagner@uchicago.edu
Tel. 773.702.1562
Fax. 773.702.0886
ACKNOWLEDGEMENTS
This research was supported by a grant from the Spencer Foundation to Susan Goldin-
Meadow. We thank Stella Felix Lourenco for her help with conceptualizing the study
design, and Valerie Ellois and Danielle Parisi for their assistance with data collection and
coding. We are most grateful to the principals, teachers and children who made this
research possible.
Children's Gesture and Learning
ABSTRACT
Adding gesture to instruction makes that instruction more effective. The question
we ask here is why. Forty-nine 3rd and 4th grade children were given instruction in
mathematical equivalence with gesture or without it. Children given instruction
illustrating a correct problem-solving strategy in gesture were significantly more likely to
produce that strategy in their own gestures during the instruction period than children not
exposed to the strategy in gesture. Those children were then significantly more likely to
succeed on the posttest than children who did not produce the strategy in gesture.
Gesture in instruction encourages children to produce gestures of their own which, in
turn, may lead to learning. Children may be able to use their hands to change their
minds.
Gesture is often used in teaching contexts (Flevares & Perry, 2001; Goldin-
Meadow, Kim, & Singer, 1999; Neill, 1991) and, when it is, it promotes learning.
Children are more likely to profit from instruction when that instruction includes gesture
than when it does not (Church, Ayman-Nolley, & Estrade, 2004; Perry, Berch, &
Singleton, 1995; Singer & Goldin-Meadow, 2004; Valenzeno, Alibali, & Klatzky, 2003).
Why might gesture in instruction lead to learning? Multimodal presentation is, in general,
associated with learning (Mayer & Moreno, 1998). Moreover, listeners are often better
able to grasp a speaker’s message when that message is conveyed in gesture and speech
than when it is conveyed in speech alone (Goldin-Meadow et al., 1999; Goldin-Meadow
& Singer, 2003; Kelly, Barr, Church, & Lynch, 1999; Thompson & Massaro, 1986,
1994). Thus, the gestures teachers produce in instruction could help children understand
the words that accompany those gestures and, in this way, facilitate learning.
But the gestures teachers produce in instruction could also have an impact on
learning by encouraging children to produce gestures of their own. People have been
shown to mimic certain nonverbal behaviors that their conversational partners produce;
for example, they imitate their partner’s facial expressions or idiosyncratic motor
behaviors (Chartrand & Bargh, 1999). Perhaps people mimic their partner’s gestures as
well. If so, children may produce gestures of their own when they see their teachers
gesture. In turn, producing one’s own gestures could lead to learning.
Producing gesture has, in fact, been found to be associated with learning. For
example, children who are at a transitional point in acquiring a task frequently produce
gestures that convey information not found anywhere in their speech (Church & Goldin-
Meadow, 1986; Goldin-Meadow, Alibali & Church, 1993; Perry, Church & Goldin-
Meadow, 1988; Pine, Lufkin & Messer, 2004). As another example, children produce
more substantive gestures when they are asked to reason about objects than when asked
to merely describe those objects, that is, when they are asked to think more deeply about
a task (Alibali, Kita, & Young, 2000). Finally, children who express their budding
knowledge in gesture as they learn a task are more likely to retain their new knowledge
than children who do not use gesture in this way (Alibali & Goldin-Meadow, 1993).
Our first goal is to determine whether having teachers gesture while instructing
children increases the likelihood that the children will produce gestures of their own
during that instruction. We will find that it does. Our second goal then is to determine
whether the children who produce gestures of their own during instruction learn the task
more readily than the children who do not produce gestures.
METHOD
Participants
Forty-nine late third grade and early fourth grade children participated in the
study. An additional 19 children took the pretest but solved some of the pretest problems
correctly and were therefore excluded from the study. Children were recruited
through public and private elementary schools in the Chicago area.
Procedure
Pretest. Children solved a pencil-and-paper pretest consisting of 6 mathematical
equivalence problems with equivalent addends (4 + 6 + 3 = 4 + __) and 6 mathematical
equivalence problems without equivalent addends (7 + 3 + 4 = 5 + __). None of the
children solved any of the pretest problems correctly. After children completed the
pretest, they explained their solutions to the 6 problems with equivalent addends to an
experimenter at a whiteboard.
Instruction. A second experimenter, the instructor, then conducted training
individually with each child at the whiteboard. The instructor showed the child how to
solve six mathematical equivalence problems. After each problem, the child was given a
different problem to solve and explain. The children thus solved six problems on their
own during the instruction period. The instructor taught the equalizer strategy on all of
the problems – the notion that the two sides of an equation need to be considered
separately and must be equal to one another. For example, on the problem 4 + 6 + 3 = __
+ 3, the instructor put 10 in the blank and said, "I wanted to make one side equal to the
other side. See, 4 plus 6 plus 3 equals 13 and 10 plus 3 equals 13. That's why I put 10 in
the blank."
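The equalizer strategy reduces to a simple computation: total the addends on the fully specified side, then fill the blank so the other side reaches the same total. A minimal sketch in Python (the function name and list representation are our own, purely illustrative):

```python
def equalizer(left_addends, right_addends):
    """Equalizer strategy: choose the blank so the two sides sum to the same total.

    left_addends  -- the numbers on the fully specified side, e.g. [4, 6, 3]
    right_addends -- the known numbers on the side containing the blank, e.g. [3]
    """
    total = sum(left_addends)          # e.g. 4 + 6 + 3 = 13
    return total - sum(right_addends)  # e.g. 13 - 3 = 10

# The worked example from the instruction: 4 + 6 + 3 = __ + 3
print(equalizer([4, 6, 3], [3]))  # prints 10
```

The same computation covers pretest problems without equivalent addends, e.g. 7 + 3 + 4 = 5 + __, where the blank is 14 - 5 = 9.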
Instruction varied along two dimensions. First, we manipulated whether the
instructor’s explanations contained gesture. In the Speech alone condition, the instructor
clasped her hands at her waist while giving the equalizer explanation in speech. In the
Speech + Gesture condition, the instructor swept her left hand under the left side of the
equation when she said “one side,” and then swept her right hand under the right side of
the equation when she said “the other side.” Second, we manipulated the children’s
attention to gesture and also their attention to speech. In the Copy-Gesture condition,
children were encouraged to copy the instructor’s gestures when they produced their own
explanations: "During your explanation, try to move your hands the way I did." In the
Copy-Speech condition, children were encouraged to copy the instructor’s words:
"During your explanation, try to say something like what I said." Children were
reminded of these instructions on each problem. A control group of children was given
instruction in mathematical equivalence but was not encouraged to copy the instructor.
This design resulted in five instructional conditions: (1) Speech alone, no copying
instructions; (2) Speech alone, child instructed to copy speech; (3) Speech + Gesture, no
copying instructions; (4) Speech + Gesture, child instructed to copy speech; (5) Speech +
Gesture, child instructed to copy gesture. Children were randomly assigned to one of the
five conditions prior to taking the pretest.
Posttest. Immediately after the instruction period, children completed a posttest
which was identical in form to the pretest, and was administered by the first
experimenter.
Coding
The speech and gesture that the children produced during the entire session were
transcribed and coded according to a previously developed system (Perry et al., 1988).
Speech and gesture were coded separately; speech was coded with the picture turned off,
and gesture was coded with the sound turned off. We counted the number of times each
child produced an equalizer strategy in speech or in gesture during the instruction period.
The children received credit for having produced an equalizer strategy even if they did
not copy the instructor’s speech or gesture exactly; 86% of the children’s equalizer
strategies in gesture were identical to the instructor’s, as were 46% of their equalizer
strategies in speech. When children varied from the instructor's spoken model, they
tended to state the equivalence of the two sides of the equation rather than go through the
addition steps. For example, for the problem 4 + 6 + 3 = __ + 3, a child might say, "This side is 13
and the other side is 13." When children varied from the instructor’s gesture model, they
tended to substitute points for the sweeping hands. For example, a child might point at
each number on the left side with the left hand, and then point at each number on the
right side with the right hand (rather than sweeping the left hand under the left side and
the right hand under the right side of the equation).
RESULTS
Does gesture in instruction encourage children to produce gestures of their own?
We begin by determining how many children produced the equalizer strategy in
gesture during the instruction period. We found that 18 of the 29 (62%) children who
were given a model for equalizer in gesture during training (i.e., children in the Speech +
Gesture condition) expressed the strategy in gesture during training, compared to only 4
of the 20 (20%) children who were not given a gestural model of equalizer (children in
the Speech alone condition), χ2(1)=8.64, p=.013. In addition to looking at individual
children, we also calculated the number of times each child produced the equalizer
strategy in gesture. Children in the Speech + Gesture condition produced more instances
of the equalizer strategy in gesture during training than children in the Speech alone
condition (2.1 vs. 0.4, F(1,47)=10.945, p=.002; see Figure 1). Importantly, children in
the Speech + Gesture condition and the Speech alone condition did not differ in how
much they gestured on the pretest – the proportion of children who gestured on the
pretest did not differ across the groups (70% vs. 86%, χ2(1)=1.91, ns), nor did their
number of explanations containing gesture (3.0 vs. 3.8, F(1,47)=1.54, ns); none of the
children in either group produced equalizer in gesture on the pretest.
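The 2 x 2 comparison above can be rechecked from the reported counts. The sketch below computes a Pearson chi-square in plain Python (our own helper, with no continuity correction; the exact statistic depends on such correction and rounding choices, so it need not match the reported value to the second decimal):

```python
def chi_square(table):
    """Pearson chi-square statistic for a contingency table, given as a list of rows.

    No continuity correction is applied.
    """
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            stat += (observed - expected) ** 2 / expected
    return stat

# Rows: Speech + Gesture condition (18 of 29 gestured equalizer),
#       Speech alone condition (4 of 20).
# Columns: produced equalizer in gesture, did not.
print(round(chi_square([[18, 11], [4, 16]]), 2))  # prints 8.47
```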
Not surprisingly, since children heard the equalizer strategy in speech in both
conditions, the two groups of children also did not differ in how often they produced
equalizer in speech during instruction: 27 of the 29 (93%) children in the Speech +
Gesture condition produced equalizer in speech, compared to 16 of the 20 (80%) children
in the Speech alone condition, (χ2(1)=1.89, ns). Moreover, children in both groups
produced approximately the same number of instances of the equalizer strategy in speech
(4.1 vs. 3.3, F(1,47)=1.90, ns, see Figure 1).
Thus, our attempts to manipulate children’s gesture by modeling gesture had the
desired effect – children who saw gesture produced gesture. In contrast, our efforts to
manipulate children’s gesture by asking them to gesture were not successful. Six of the 9
(67%) children who were given Speech + Gesture instruction and were asked to copy the
instructor’s gesture expressed equalizer in gesture, compared to 6 of the 10 (60%)
children in this condition who were asked to copy speech, and 6 of the 10 (60%) children
in the condition who were given no instruction to copy anything (χ2(2)=.12, ns).
Moreover, the number of times the children in the Speech + Gesture condition expressed
the equalizer strategy in gesture did not differ significantly across the 3 copying groups
(3.1, 1.7, 1.5; F(2,26)=1.59, ns).
Similarly, our requests for children to copy the instructor’s speech had no effect
(although this non-effect could reflect the fact that almost all of the children produced
equalizer in speech, i.e., there may have been a ceiling effect). In the Speech alone
condition, 9 of the 10 (90%) children asked to copy the instructor’s speech expressed
equalizer in speech, compared to 7 of the 10 (70%) children given no instruction to copy
speech (χ2(1)=.31, ns). In the Speech + Gesture condition, all 10 (100%) of the children
asked to copy the instructor’s speech expressed equalizer in speech, compared to 8 of the
9 (89%) children asked to copy gesture, and 9 of the 10 (90%) children given no
instruction to copy (χ2(2)=1.1, ns). Moreover, the number of times the children
expressed equalizer in speech did not differ significantly across the 5 groups (3.9, 2.7,
4.2, 4.7, 3.5; F(1,44)=1.25, ns).
Are children who gesture during instruction more likely to learn than children who do
not gesture?
Our next question was whether the children who expressed equalizer in gesture
during the instruction period were particularly likely to succeed on the posttest.
Collapsing the data across the five conditions, we divided children into three
groups: (1) 22 children who expressed the equalizer strategy in gesture during the
instruction period; all of these children also expressed the equalizer strategy in
speech. (2) 21 children who expressed the equalizer strategy in speech but not in
gesture during the instruction period. (3) 6 children who did not express the
equalizer strategy in either gesture or speech during the instruction period.
We then looked at how many children in each of these three groups succeeded on
the posttest. We measured success in two ways. First, we classified children as
succeeding on the posttest if they put the correct solution in the blank on at least 4 of the 6
problems. Using this criterion, we found that 19 of the 22 (86%) children who expressed
equalizer in gesture and speech were successful, compared to 8 of the 21 (38%) children
who expressed equalizer in only speech, and to none of the 6 children who did not
express equalizer at all (χ2(2)=18.51, p<.001). Second, we calculated the total number of
problems on the posttest that the children in each group answered correctly. Figure 2
presents the data. The children’s performance during instruction was significantly related
to their performance on the posttest (F(2,43)=13.71, p<.001). Children who expressed
equalizer in gesture and speech answered significantly more problems correctly than
children who expressed equalizer in speech alone (p=.004, Tukey’s HSD) who, in turn,
answered significantly more problems correctly than children who did not express
equalizer at all during instruction (p=.035). Moreover, children who expressed equalizer
in gesture and speech during instruction answered significantly more problems correctly
than children who never expressed equalizer during instruction (p<.001).
Importantly, we see this same pattern no matter what type of instruction the
children received. Table 1 presents the mean number of problems answered correctly by
children in each of the 5 conditions in the study; children are categorized according to the
modality in which they expressed the equalizer strategy during the instruction period.
Note that, in each of the conditions, children who produced equalizer in gesture and
speech during instruction answered more problems correctly than children who produced
equalizer in speech only during instruction who, in turn, answered more problems
correctly than children who did not produce equalizer at all during instruction.
Overall, children given instruction in Speech + Gesture tended to be more
successful on the posttest than children given instruction in Speech alone, but this
difference did not reach statistical significance for either measure of success: (1) 19 of
the 29 (65%) children given instruction in Speech + Gesture solved at least 4 of the 6 posttest
problems correctly, compared to 8 of the 20 (40%) children given instruction in Speech
alone (χ2(1)=3.12, p=.078). (2) Children given instruction in Speech + Gesture answered
3.7 problems on the posttest correctly, compared to 2.5 for children given instruction in
Speech alone (F(1,47)=2.27, ns). Thus, what really mattered in predicting learning was
not the instruction per se, but the effect that the instruction had on the children’s
performance during this period – in particular, whether the children produced the correct
problem-solving strategy in gesture as well as in speech during the instruction period.
The data suggest that producing a correct strategy in gesture has a positive effect
on learning above and beyond producing that same strategy in speech. To pursue this
hypothesis further, we correlated a child's posttest performance with the number of times
the child said and gestured the equalizer strategy. Posttest performance (the number of
problems answered correctly) was highly correlated with the number of times that
children both said (r =.34, p = .014) and gestured (r =.42, p<.01) the equalizer strategy.
However, saying and gesturing the equalizer strategy were also highly correlated (r
=.415, p <.01). We therefore conducted a partial correlation analysis to determine
whether the effects of speech and gesture were independent of one another. Controlling
for the number of times that the children gestured the equalizer strategy, the effect of
expressing equalizer in speech was no longer significant (r =.21, ns). In contrast,
controlling for the number of times that the children said the equalizer strategy, the effect
of expressing equalizer in gesture remained statistically significant (r =.33, p=.021).
Thus, the children's gesture behavior accounted for variability in learning that was not
accounted for by their speech behavior. The data provide support for the hypothesis that
children’s gestures contribute to learning independently of their words.
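The partial-correlation step can be reproduced from the zero-order correlations reported above via the standard first-order formula, r_xy.z = (r_xy - r_xz * r_yz) / sqrt((1 - r_xz^2)(1 - r_yz^2)). A sketch (variable names are ours; the inputs are the two-decimal values given in the text, so the outputs match the reported partials only up to rounding):

```python
from math import sqrt

def partial_corr(r_xy, r_xz, r_yz):
    """First-order partial correlation between x and y, controlling for z."""
    return (r_xy - r_xz * r_yz) / sqrt((1 - r_xz ** 2) * (1 - r_yz ** 2))

# Zero-order correlations reported in the text (posttest = number correct):
r_post_speech = 0.34      # posttest with instances of equalizer in speech
r_post_gesture = 0.42     # posttest with instances of equalizer in gesture
r_speech_gesture = 0.415  # equalizer in speech with equalizer in gesture

# Gesture controlling for speech stays near the reported .33 ...
print(round(partial_corr(r_post_gesture, r_post_speech, r_speech_gesture), 2))
# ... while speech controlling for gesture drops to roughly the reported .21.
print(round(partial_corr(r_post_speech, r_post_gesture, r_speech_gesture), 2))
```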
DISCUSSION
When given information in both gesture and speech, the children in our study
conveyed that information in their own gestures – and did so more often than when given
the information in speech alone. Indeed, children were just as likely to gesture a correct
problem-solving strategy when they merely observed the instructor gesturing that strategy
as when they were explicitly asked to copy the instructor’s gestures. To our knowledge,
this is the first demonstration that gestures produced by one member of an interaction can
increase both the type and number of gestures produced by the other member. This
finding is consistent with studies reporting that individuals mimic other nonverbal
behaviors of their interlocutors (Chartrand & Bargh, 1999).
In turn, the gestures that the children produced during instruction seemed to have
an effect on their learning. Children who expressed a correct problem-solving strategy in
both gesture and speech were significantly more likely to answer the problem correctly
than children who expressed the correct strategy in speech alone, or than children who
did not express the strategy at all. Moreover, the gestures that the children produced
during instruction had an effect on learning above and beyond the effect that their words
had on learning. These findings provide support for the hypothesis that adding gesture to
instruction promotes learning, at least in part, because it encourages learners to produce
their own gestures.
Of course, it is possible that the children in our study who chose to gesture
equalizer during instruction were just those children who were particularly ready to learn
mathematical equivalence in the first place. If so, gesturing might have been reflecting
the child’s readiness to learn, rather than causing the learning. However, children in the
Speech + Gesture condition did not differ from children in the Speech alone condition in
the number of gestures they produced prior to instruction, suggesting that our
manipulation was instrumental in getting the children to gesture during instruction.
Moreover, the fact that only 4 of the children in the Speech alone condition produced the
equalizer strategy in gesture (compared to 18 of the children in the Speech + Gesture
condition) suggests that our manipulation shaped the kinds of gestures the children
produced during instruction. And it was the production of equalizer in gesture, not
gesturing overall, that predicted success on the posttest: 19 children gestured during
instruction but did not produce an equalizer strategy in gesture; only 8 (42%) of these
children succeeded on the posttest. In contrast, 19 of the 22 (86%) children who did
produce an equalizer strategy in gesture succeeded on the posttest (χ2(1)=8.88, p<.01).
Thus, the gestures that the children saw during instruction influenced the types of
gestures that they themselves produced which, in turn, seemed to have an impact on
learning. These findings suggest that children’s gestures may be playing a causal role in
changing their knowledge.
There are several good reasons to believe that producing one’s own gestures
might contribute to learning. First, gesture production is associated with a reduction in
cognitive load: speakers expend less cognitive effort when gesturing while explaining a
math problem than when not gesturing (Goldin-Meadow, Nusbaum, Kelly, & Wagner,
2001; Wagner, Nusbaum, & Goldin-Meadow, in press). Gesturing while speaking thus
eases the burden of speech production, providing a learner with additional cognitive
resources that could be used to reflect on and store new representations.
Second, gesture production could directly change the on-line memory processes
involved in storing new representations. There is a robust finding in the memory
literature that performing an action enhances one's memory for that action – the subject-
performed task (SPT) effect (Cohen, 1981; Engelkamp & Zimmer, 1984). Recent
evidence suggests that sign language may engage the same mechanism. Hearing signers
remember action phrases that have been signed better than action phrases that have been
said, and the size of this effect is comparable to traditional SPT effects (von Essen &
Nilsson, 2003; Zimmer & Engelkamp, 2003). Gestures, as motor behaviors associated
with speech, might also engage this mechanism. In other words, gesturing while
speaking may create a more lasting representation in memory independent of, or in
conjunction with, reductions in cognitive load.
Finally, gesture production might encourage speakers to form imagistic
representations that can later be accessed. McNeill (1992) has argued that the act of
gesturing and speaking influences speakers’ on-line thought processes, and that gesturing,
in particular, induces imagistic processing. Producing gesture along with speech could
encourage children to form imagistic representations along with their verbal
representations. And children whose problem representations are supported by both
verbal and imagistic forms may be particularly likely to maintain those representations in
memory (Clark & Paivio, 1991).
Whatever the mechanism, it is clear that including gesture in instruction
encourages children to produce gestures of their own, and that producing one’s own
gestures is associated with learning. Children may thus be able to use their hands to
change their minds.
REFERENCES
Alibali, M. W., & Goldin-Meadow, S. (1993). Gesture-speech mismatch and mechanisms
of learning: What the hands reveal about a child's state of mind. Cognitive
Psychology, 25, 468-523.
Alibali, M. W., Kita, S., & Young, A. J. (2000). Gesture and the process of speech
production: We think, therefore we gesture. Language & Cognitive Processes, 15,
593-613.
Chartrand, T. L., & Bargh, J. A. (1999). The chameleon effect: The perception-behavior
link and social interaction. Journal of Personality & Social Psychology, 76, 893-
910.
Church, R. B., Ayman-Nolley, S., & Estrade, J. (2004). The effects of gestural instruction
on bilingual children. International Journal of Bilingual Education, in press.
Church, R. B., & Goldin-Meadow, S. (1986). The mismatch between gesture and speech
as an index of transitional knowledge. Cognition, 23, 43-71.
Clark, J. M., & Paivio, A. (1991). Dual coding theory and education. Educational
Psychology Review, 3, 149-210.
Cohen, R. L. (1981). On the generality of some memory laws. Scandinavian Journal of
Psychology, 22, 267-281.
Engelkamp, J., & Zimmer, H. D. (1984). Motor programme information as a separable
memory unit. Psychological Research, 46, 283-299.
Flevares, L.M., & Perry, M. (2001). How many do you see? The use of nonspoken
representations in first-grade mathematics lessons. Journal of Educational
Psychology, 93, 330-345.
Goldin-Meadow, S., Alibali, M. W., & Church, R. B. (1993). Transitions in concept
acquisition: Using the hand to read the mind. Psychological Review, 100, 279-
297.
Goldin-Meadow, S., Kim, S., & Singer, M. (1999). What the teacher's hands tell the
student's mind about math. Journal of Educational Psychology, 91, 720-730.
Goldin-Meadow, S., Nusbaum, H., Kelly, S. D., & Wagner, S. (2001). Explaining math:
Gesturing lightens the load. Psychological Science, 12, 516-522.
Goldin-Meadow, S., & Singer, M. A. (2003). From children’s hands to adults’ ears:
Gesture’s role in teaching and learning. Developmental Psychology, 39, 509-520.
Kelly, S. D., Barr, D. J., Church, R. B., & Lynch, K. (1999). Offering a hand to pragmatic
understanding: The role of speech and gesture in comprehension and memory.
Journal of Memory & Language, 40, 577-592.
Mayer, R. E., & Moreno, R. (1998). A split-attention effect in multimedia learning:
Evidence for dual processing systems in working memory. Journal of
Educational Psychology, 90, 312-320.
McNeill, D. (1992). Hand and mind: What gestures reveal about thought. Chicago: The
University of Chicago Press.
Neill, S. (1991). Classroom nonverbal communication. London: Routledge.
Perry, M., Berch, D. B., & Singleton, J. L. (1995). Constructing shared understanding:
The role of nonverbal input in learning contexts. Journal of Contemporary
Legal Issues, Spring, 213-236.
Perry, M., Church, R. B., & Goldin-Meadow, S. (1988). Transitional knowledge in the
acquisition of concepts. Cognitive Development, 3, 359-400.
Pine, K.J., Lufkin, N., & Messer, D. (2004). More gestures than answers: Children
learning about balance. Developmental Psychology, revision under review.
Singer, M.A., & Goldin-Meadow, S. (2004). Children learn when their teacher’s gestures
differ from speech. Under review.
Thompson, L. A., & Massaro, D. W. (1986). Evaluation and integration of speech and
pointing gestures during referential understanding. Journal of Experimental Child
Psychology, 42, 144-168.
Thompson, L. A., & Massaro, D. W. (1994). Children's integration of speech and
pointing gestures in comprehension. Journal of Experimental Child Psychology,
57, 327-354.
Valenzeno, L., Alibali, M. W., & Klatzky, R. (2003). Teachers' gestures facilitate
students' learning: A lesson in symmetry. Contemporary Educational Psychology,
28, 187-204.
von Essen, J. D., & Nilsson, L.-G. (2003). Memory effects of motor activation in subject-
performed tasks and sign language. Psychonomic Bulletin & Review, 10, 445-449.
Wagner, S. M., Nusbaum, H., & Goldin-Meadow, S. (in press). Probing the mental
representation of gesture: Is handwaving spatial? Journal of Memory &
Language.
Zimmer, H. D., & Engelkamp, J. (2003). Signing enhances memory like performing
actions. Psychonomic Bulletin & Review, 10, 450-454.
Table 1
Number Correct on Posttest as a Function of Experimental Condition and Child's Production of Equalizer during Instruction (a)

                                        Experimenter Modeled          Experimenter Modeled Equalizer
                                        Equalizer in Speech           in Speech and in Gesture
Production of Equalizer (EQ)            No Copying    Copy            No Copying    Copy          Copy
during Instruction                      Instructions  Speech          Instructions  Speech        Gesture
Never expressed EQ                      0.00 (N=3)    0.00 (N=1)      0.00 (N=1)    --   (N=0)    0.00 (N=1)
Expressed EQ in Speech Only             3.60 (N=5)    2.43 (N=7)      2.33 (N=3)    1.50 (N=4)    2.50 (N=2)
Expressed EQ in Speech + Gesture        5.50 (N=2)    2.50 (N=2)      5.50 (N=6)    4.67 (N=6)    4.83 (N=6)
Total                                   2.90 (N=10)   2.20 (N=10)     4.00 (N=10)   3.40 (N=10)   3.78 (N=9)

(a) The mean number of correct answers given on the posttest by children in each of the five conditions. Children are classified according to the modality in which they expressed the equalizer strategy during the instruction period. Numbers in parentheses are the number of children who contributed to each mean.
[Figure 1 here: bar graph of the mean number of instances of equalizer expressed by the child (y-axis, 0-5), plotted by instruction condition (Speech vs. Speech + Gesture), with separate bars for equalizer expressed in gesture and in speech.]
Figure 1. Mean number of equalizer explanations children produced when given
instruction with gesture and without it. Children given instruction in speech and gesture
expressed significantly more equalizer strategies in gesture than children given
instruction in speech alone. There were no significant differences between the groups in
number of equalizer strategies expressed in speech. Error bars represent standard errors.
[Figure 2 here: bar graph of the mean number correct on the posttest (y-axis, 0-6), plotted by the child's production of equalizer during instruction (No EQ, EQ in Speech, EQ in Speech & Gesture).]
Figure 2. Mean number of correct answers on the posttest. Children are categorized
according to the modality in which they produced the equalizer strategy during the
instruction period. Children who produced equalizer in gesture and speech produced
significantly more correct answers than children who produced equalizer only in speech
who, in turn, produced more correct answers than children who did not produce
equalizer at all. Error bars represent standard errors.
... Spontaneous gestures have been shown to predict conceptual reasoning and learning by contributing to different types of mathematical reasoning (e.g., Cook & Goldin-Meadow, 2006;Ottmar & Landy, 2017;Smith et al., 2014). For example, children encouraged to gesture while explaining their solutions to mathematics equivalence problems were more likely to express new and correct problem-solving strategies compared to those told not to gesture and those told to explain their solutions with no mention of gestures (Broaders et al., 2007). ...
... Some studies have shown that directed actions from earlier training leave a legacy in gesture production in subsequent performance (Donovan et al., 2014). Cook and Goldin-Meadow (2006) found that children were more likely to produce gestures when given instructions that included actions about a solution strategy. Moreover, children's gestures were "picking up on, and reproducing, the content of the instructor's gesture" (p. ...
... This study also explored the influences on embodied mathematical reasoning in terms of the history of these movements. Many embodied curricula and game-based interventions use externally generated movements to bring about the desired behaviors, prompting students to mimic the actions of another, touch certain locations, or follow certain patterns (e.g., Cook et al., 2006;Nathan et al., 2014;Thomas & Lleras, 2007). The present study is one of the very few that compare the effects of performing externally generated directed actions to performing internally generated predicted actions. ...
... Gestures can be used to illustrate mathematical processes, illustrate relationships between concepts, or visualize solutions to mathematical problems (Cook & Goldin-Meadow, 2009). In many situations, hand gestures become an additional way to communicate and understand mathematics. ...
Article
Full-text available
Gestures can help students communicate mathematically. This study aims to identify the gestures that help deaf students solve mathematics problems. The research uses a descriptive approach with a qualitative design; the data sources are test questions, observations, and interviews. The research was conducted at SLB ABC Balung with two SMALB students as subjects. The results show that each subject had their own characteristics when working on the problems. The first student preferred to count using movements assisted by scribbles, while the second student preferred to use hand gestures. The first student produced 38 gestures: 17 iconic, 10 metaphorical, and 11 deictic. The second student produced 40 gestures: 13 iconic, 5 metaphorical, and 22 deictic. Overall, the gesture most often used by the deaf students when solving the problems was the deictic gesture (33 occurrences), and the gesture that appeared least was the metaphorical gesture (15 occurrences).
... Previous research on auditory and visual processing has demonstrated that combined audiovisual signals are integrated in the comprehension of speech content. Compared to unimodal stimuli, multimodal stimuli such as paired speech and gestures provide an audience with more information, which helps them identify speakers' emotions 11 , helps children learn 19 , facilitates presentations and discussions among adults 12,13 , and aids their understanding of complex speech content 20 . In contrast to the previously reported positive impact of audiovisual integration, our results demonstrated a negative effect: the CAM condition received no better evaluation than the VO condition. ...
Article
Full-text available
During the pandemic, digital communication became paramount. Due to the discrepancy between the placement of the camera and the screen in typical smartphones, tablets and laptops, mutual eye contact cannot be made in standard video communication. Although the positive effect of eye contact in traditional communication has been well-documented, its role in virtual contexts remains less explored. In this study, we conducted experiments to gauge the impact of gaze direction during a simulated online job interview. Twelve university students were recruited as interviewees. The interview consisted of two recording sessions where they delivered the same prepared speech: in the first session, they faced the camera, and in the second, they directed their gaze towards the screen. Based on the recorded videos, we created three stimuli: one where the interviewee’s gaze was directed at the camera (CAM), one where the interviewee’s gaze was skewed downward (SKW), and a voice-only stimulus without camera recordings (VO). Thirty-eight full-time workers participated in the study and evaluated the stimuli. The results revealed that the SKW condition garnered significantly less favorable evaluations than the CAM condition and the VO condition. Moreover, a secondary analysis indicated a potential gender bias in evaluations: the female evaluators evaluated the interviewees of SKW condition more harshly than the male evaluators did, and the difference in some evaluation criteria between the CAM and SKW conditions was larger for the female interviewees than for the male interviewees. Our findings emphasize the significance of gaze direction and potential gender biases in online interactions.
... Students and teachers can make gestures in mathematics classrooms to directly represent mathematical objects and ideas, as well as to point to mathematical representations (Alibali & Nathan, 2012). Research has shown that the use of gestures by students and teachers while explaining or exploring a concept is correlated with stronger mathematical reasoning (Cook & Goldin-Meadow, 2006; Goldin-Meadow, 2005; Nathan et al., 2014). A gesture in the math classroom might be a student creating a triangle with their hands, putting their pointer fingers together and thumbs together, to represent the geometric shape and illustrate acute angles. ...
Preprint
Full-text available
In this chapter, we explore the use of Augmented Reality (AR) and Virtual Reality (VR) in geometry courses for teachers. There is increasing use of AR and VR technologies, such as AR and VR goggles, in K-12 schools, providing new opportunities for learners to interact with geometry in three-dimensional (3D) environments. These opportunities lead to a need for teacher training related to these technologies. Further, AR and VR technologies offer teachers opportunities to understand geometry concepts themselves more deeply. We connect AR/VR technologies to theories of embodied cognition, as well as to the interaction with geometric objects that AR/VR allows. This chapter presents illustrative examples and observations from the GeT courses, highlighting how teachers engage with properties of 2D and 3D shapes in AR/VR and collaborate with others around dynamic simulations. We emphasize the potential of these environments to enhance geometry learning by allowing learners to manipulate shapes and observe real-time changes in measurements and figures. Further, we focus on the affordances and constraints of AR/VR for classroom use, what teachers can learn about using AR/VR technology in their classroom in GeT courses, and what teachers can learn about geometry from AR/VR. Particularly, we discuss deriving and explaining geometric arguments through teacher experiences with AR/VR technologies and provide recommendations for geometry teachers and teacher educators.
... Furthermore, embodied cognition theory proposes that adults and children think and learn in essentially the same way by apprehending and using their bodies. Just as much as adults use gesture and the experience of their bodies situated in sociocultural space to learn and to communicate with each other, so children do in areas such as language learning (Toumpaniari et al., 2015) and mathematics problem solving (Cook & Goldin-Meadow, 2006; Ruiter et al., 2015). • Socioculturalism, as indicated earlier, contends that learners are situated in CoPs and social activity systems that constitute their knowledge via apprenticeship or similar learning processes. ...
Article
Full-text available
The theory of andragogy has had considerable purchase amongst adult educators over time. Although it differs in emphasis in its north American and eastern European poles, the theory derives from a psychological distinction between the way that adults and children learn. Defining the theory in the terms of its most influential theorist, Malcolm Knowles, this article develops a critique of andragogy in relation to mainstream theories of learning. Knowles argues adults are psychologically disposed to "immediate", life-orientated learning, whereas children's learning has a "postponed" developmental orientation to the future. However, this particular adult-child distinction has little veracity or credibility when considered against mainstream theories of learning. Rather than a purported cleavage between the learning of adults and children, it seems that Knowles is actually driving at a distinction between non-formal and formal teaching methods, and that this is a better way of thinking about the distinctiveness of adult education than any insights that "andragogy" may have to offer.
... A large number of studies have also linked the early development of spatial skills to several domains required in STEM education, such as math knowledge, problem-solving abilities, mental rotation competence, and executive functions. Research conducted with young children has demonstrated the relation between gesturing and language learning (Iverson & Goldin-Meadow, 2005) as well as mathematics education (Cook & Goldin-Meadow, 2006). Gestures have also been found to be beneficial during complex learning, such as the learning of abstract concepts (Malinverni & Pares, 2014) and in the mathematics domain (Macedonia, 2019). ...
Article
Full-text available
From early childhood, children begin to explore and interact with the physical environment, gradually improving their spatial skills. Understanding how this process occurs is essential for teachers because they can effectively support the development of these skills in the educational context. Indeed, corporeity is an increasingly central dimension of educational experience and can be used as a pedagogical tool to foster active learning and skill acquisition, especially within the STEM disciplines. The aim of this contribution is to thematize, within the theoretical framework of Embodied Cognition, how it is necessary to anchor the learning of concepts related to STEM disciplines to dynamics afferent to the bodily dimension. Indeed, this enables the pursuit of broader pedagogical goals: on the one hand, emphasizing the role of bodily experiences in shaping our cognitive processes, and on the other hand, encouraging the design of learning scenarios and educational technologies rooted in sensorimotor experience and action.
Article
Gesture and speech are tightly linked and form a single system in typical development. In this review, we ask whether and how the role of gesture and relations between speech and gesture vary in atypical development by focusing on two groups of children: those with peri‐ or prenatal unilateral brain injury (children with BI) and preterm born (PT) children. We describe the gestures of children with BI and PT children and the relations between gesture and speech, as well as highlight various cognitive and motor antecedents of the speech‐gesture link observed in these populations. We then examine possible factors contributing to the variability in gesture production of these atypically developing children. Last, we discuss the potential role of seeing others’ gestures, particularly those of parents, in mediating the predictive relationships between early gestures and upcoming changes in speech. We end the review by charting new areas for future research that will help us better understand the robust roles of gestures for typical and atypically‐developing child populations.
Article
Full-text available
Public speakers like politicians carefully craft their words to maximize the clarity, impact, and persuasiveness of their messages. However, these messages can be shaped by more than words. Gestures play an important role in how spoken arguments are perceived, conceptualized, and remembered by audiences. Studies of political speech have explored the ways spoken arguments are used to persuade audiences and cue applause. Studies of politicians' gestures have explored the ways politicians illustrate different concepts with their hands, but have not focused on gesture's potential as a tool of persuasion. Our paper combines these traditions to ask first, how politicians gesture when using spoken rhetorical devices aimed at persuading audiences, and second, whether these gestures influence the ways their arguments are perceived. Study 1 examined two rhetorical devices (contrasts and lists) used by three politicians during U.S. presidential debates and asked whether the gestures produced during contrasts and lists differ. Gestures produced during contrasts were more likely to involve changes in hand location, and gestures produced during lists were more likely to involve changes in trajectory. Study 2 used footage from the same debates in an experiment to ask whether gesture influenced the way people perceived the politicians' arguments. When participants had access to gestural information, they perceived contrasted items as more different from one another and listed items as more similar to one another than they did when they only had access to speech. This was true even when participants had access to only gesture (in muted videos). We conclude that gesture is effective at communicating concepts of similarity and difference and that politicians (and likely other speakers) take advantage of gesture's persuasive potential.
Article
Full-text available
At what point in the process of speech production is gesture involved? According to the Lexical Retrieval Hypothesis, gesture is involved in generating the surface forms of utterances. Specifically, gesture facilitates access to items in the mental lexicon. According to the Information Packaging Hypothesis, gesture is involved in the conceptual planning of messages. Specifically, gesture helps speakers to "package" spatial information into verbalisable units. We tested these hypotheses in 5-year-old children, using two tasks that required comparable lexical access, but different information packaging. In the explanation task, children explained why two items did or did not have the same quantity (Piagetian conservation). In the description task, children described how two items looked different. Children provided comparable verbal responses across tasks; thus, lexical access was comparable. However, the demands for information packaging differed. Participants' gestures also differed across the tasks. In the explanation task, children produced more gestures that conveyed perceptual dimensions of the objects, and more gestures that conveyed information that differed from the accompanying speech. The results suggest that gesture is involved in the conceptual planning of speech.
Article
Full-text available
Some studies investigating the role gesture plays in communication claim that gesture has a minimal role, while others claim that gesture carries a large communicative load. In these studies, however, the role of gesture has been assessed in contexts where speech is understood and could easily carry the entire communicative burden. We examine the role of gesture when speech is inaccessible to the listener. We investigated a population of children who, by their circumstances, are exposed to a language that is not accessible to them: Spanish-speaking students in an English-speaking school. Fifty-one first grade English-speaking and Spanish-speaking students were tested. Half of the English-speaking and half of the Spanish-speaking students viewed a 'speech only' math instructional tape (i.e., instruction was not accompanied by gesture), while the other half of the English-speaking and Spanish-speaking students viewed a 'speech and gesture' instructional tape. We found that learning increased two-fold for all students when gesture accompanied speech instruction, raising Spanish-speaking children's learning from 0% to 50%. We speculate that gesture improved learning for Spanish-speaking children because gestural representation is not tied to a particular language. Rather, gesture reflects concepts in the form of universal representations. Implications for the communicative function of gesture are discussed.
Article
Full-text available
Patterns of speech-related ("coverbal") gestures were investigated in two groups of right-handed, brain-damaged patients and in matched controls. One group of patients ("aphasic") had primarily anomic deficits and the other ("visuo-spatial") had visual and spatial deficits, but not aphasia. Coverbal gesture was video-recorded during the description of complex pictures and analysed for physical properties, timing in relation to speech, and ideational content. Aphasic patients produced a large number of ideational gestures relative to their lexical production and pictorial input, whereas the related production of the visuo-spatial patients was small. Controls showed intermediate values. The composition of ideational gestures was similar in the aphasic and control groups, while visuo-spatial subjects produced fewer iconic gestures (i.e., fewer gestures which show in their form the content of a word or phrase). We conclude that ideational gestures probably facilitate word retrieval, as well as reflect the transfer of information between propositional and nonpropositional representations during message construction. We suggest that conceptual and linguistic representations probably need to be re-encoded in a visuo-spatial format to produce ideational gestures.
Article
Gestures may provide the long sought-for bridge between science laboratory experiences and scientific discourse about abstract entities. In this article, we present our results of analyzing students' gestures and scientific discourse by supporting three assertions about the relationship between laboratory experiences, gestures, and scientific discourse: (1) gestures arise from the experiences in the phenomenal world, most frequently express scientific content before students master discourse, and allow students to construct complex explanations by lowering the cognitive load; (2) gestures provide a medium on which the development of scientific discourse can piggyback; and (3) gestures provide the material that “glues” layers of perceptually accessible entities and abstract concepts. Our work has important implications for laboratory experiments which students should attempt to explain while still in the lab rather than afterwards and away from the materials. © 2000 John Wiley & Sons, Inc. J Res Sci Teach 38: 103–136, 2001
Article
In 4 experiments, students who read expository passages with seductive details (i.e., interesting but irrelevant adjuncts) recalled significantly fewer main ideas and generated significantly fewer problem-solving transfer solutions than those who read passages without seductive details. In Experiments 1, 2, and 3, revising the passage to include either highlighting of the main ideas, a statement of learning objectives, or signaling, respectively, did not reduce the seductive details effect. In Experiment 4, presenting the seductive details at the beginning of the passage exacerbated the seductive details effect, whereas presenting the seductive details at the end of the passage reduced the seductive details effect. The results suggest that seductive details interfere with learning by priming inappropriate schemas around which readers organize the material, rather than by distracting the reader or by disrupting the coherence of the passage.
Article
The chameleon effect refers to nonconscious mimicry of the postures, mannerisms, facial expressions, and other behaviors of one's interaction partners, such that one's behavior passively and unintentionally changes to match that of others in one's current social environment. The authors suggest that the mechanism involved is the perception-behavior link, the recently documented finding (e.g., J. A. Bargh, M. Chen, & L. Burrows, 1996) that the mere perception of another's behavior automatically increases the likelihood of engaging in that behavior oneself. Experiment 1 showed that the motor behavior of participants unintentionally matched that of strangers with whom they worked on a task. Experiment 2 had confederates mimic the posture and movements of participants and showed that mimicry facilitates the smoothness of interactions and increases liking between interaction partners. Experiment 3 showed that dispositionally empathic individuals exhibit the chameleon effect to a greater extent than do other people.
Article
Perry, M., Berch, D., & Singleton, J. L. (1995). Constructing shared understanding: The role of nonverbal input in learning contexts. , 6, 213-235.
Article
Students viewed a computer-generated animation depicting the process of lightning formation (Experiment 1) or the operation of a car's braking system (Experiment 2). In each experiment, students received either concurrent narration describing the major steps (Group AN) or concurrent on-screen text involving the same words and presentation timing (Group AT). Across both experiments, students in Group AN outperformed students in Group AT in recalling the steps in the process on a retention test, in finding named elements in an illustration on a matching test, and in generating correct solutions to problems on a transfer test. Multimedia learners can integrate words and pictures more easily when the words are presented auditorily rather than visually. This split-attention effect is consistent with a dual-processing model of working memory consisting of separate visual and auditory channels.
Article
Most theories of pragmatics take as the basic unit of communication the verbal content of spoken or written utterances. However, many of these theories have overlooked the fact that important information about an utterance's meaning can be conveyed nonverbally. In the present study, we investigate the pragmatic role that hand gestures play in language comprehension and memory. In Experiments 1 and 2, we found that people were more likely to interpret an utterance as an indirect request when speech was accompanied by a relevant pointing gesture than when speech or gesture was presented alone. Following up on this, Experiment 3 supported the idea that speech and gesture mutually disambiguate the meanings of one another. Finally, Experiment 4 generalized the findings to different types of speech acts (recollection of events) with a different type of gesture (iconic gestures). The results from these experiments suggest that broader units of analysis beyond the verbal message may be needed in studying pragmatic understanding.