Practical lessons for phone-based assessments of learning

Noam Angrist,1,2 Peter Bergman,3 David K Evans,4 Susannah Hares,5 Matthew C H Jukes,6 Thato Letsomo2
Practice
To cite: Angrist N, Bergman P, Evans DK, et al. Practical lessons for phone-based assessments of learning. BMJ Global Health 2020;5:e003030. doi:10.1136/bmjgh-2020-003030
Handling editor Seye Abimbola
Received 28 May 2020
Revised 6 July 2020
Accepted 7 July 2020
1University of Oxford, Oxford, UK
2Young 1ove, Gaborone, Botswana
3Teachers College, Columbia University, New York, New York, USA
4Center for Global Development, Washington, DC, USA
5Center for Global Development, London, UK
6International Education Division, RTI International, London, UK

Correspondence to Dr David K Evans; devans@cgdev.org
© Author(s) (or their employer(s)) 2020. Re-use permitted under CC BY. Published by BMJ.
ABSTRACT
School closures affecting more than 1.5 billion children are designed to prevent the spread of current public health risks from the COVID-19 pandemic, but they simultaneously introduce new short-term and long-term health risks through lost education. Measuring these effects in real time is critical to inform effective public health responses, and remote phone-based approaches are one of the only viable options with extreme social distancing in place. However, both the health and education literature are sparse on guidance for phone-based assessments. In this article, we draw on our pilot testing of phone-based assessments in Botswana, along with the existing literature on oral testing of reading and mathematics, to propose a series of preliminary practical lessons to guide researchers and service providers as they try phone-based learning assessments. We provide preliminary evidence that phone-based assessments can accurately capture basic numeracy skills. We provide guidance to help teams (1) ensure that children are not put at risk, (2) test the reliability and validity of phone-based measures, (3) use simple instructions and practice items to ensure the assessment is focused on the target skill, not general language and test-taking skills, (4) adapt the items from oral assessments that will be most effective in phone-based assessments, (5) keep assessments brief while still gathering meaningful learning data, (6) use effective strategies to encourage respondents to pick up the phone, (7) build rapport with adult caregivers and youth respondents, (8) choose the most cost-effective medium and (9) account for potential bias in samples.
INTRODUCTION
School closures around the world due to COVID-19, with more than 1.5 billion learners affected, pose the potential to add a second public health challenge to the pandemic.1 In the short run, school closures are associated with rises in adolescent pregnancy.2 School closures also lead to dropout, with adverse impacts on subsequent health behaviours and health status.3 To keep students engaged and learning, education systems have rolled out a wide variety of distance-learning platforms: television programmes, radio programmes, web-based instruction, phone tutorials from teachers and others.4 Existing studies have measured how much children are engaging with educational content.5 6 But how much are they actually learning? Students commonly fall behind during school closures7 8 and that can also increase dropout rates.9 Children do not lose learning equally: children from high-income families gain learning during school closures, whereas children from low socioeconomic backgrounds lose the equivalent of several months of learning.10
Ongoing research projects in low-income and middle-income countries, where internet access can be both limited and inconsistent, seek to evaluate student learning by phone during the COVID-19 school closures to avoid putting assessors and youth at risk. There is a limited history of phone-based behavioural and learning assessments. Several studies have assessed the validity of phone-based assessments of cognitive function among elderly patients,11 including one study of literacy assessments in adults.12 Other studies have enabled community health workers to assess and report child health.13 14 We are not aware of any published studies on direct learning or health assessments of children administered by phone. This article combines past research and experience on oral assessments with ongoing piloting of phone-based assessments in a middle-income setting (Botswana) to propose a series of preliminary principles for the assessment of learning by phone.

Summary box

Assessing children and youth remotely is essential to mitigating the adverse short-term and long-term public health and education impacts of the COVID-19 pandemic, as well as of future school closures due to health and other crises.

There is existing literature on best practice strategies to carry out phone-based surveys of adults, on oral face-to-face testing of learning among children and youth, and on using technology to help community health workers identify ill or at-risk children. However, there is little evidence on assessing learning among children and youth over the phone.

Pilot experience with phone-based testing among our team, together with experience with oral assessments and phone-based surveys, provides preliminary guidance to orient those who would assess learning for out-of-school children when face-to-face assessments pose a public health risk.
Assessment by phone is a nascent field of research and much will be learnt in the current crisis and beyond.15 16 In addition to ongoing work in Botswana, authors of this paper are associated with efforts in Sierra Leone and Tanzania in partnership with the Center for Global Development and RTI International, and other teams in other countries are also implementing pilots. Not all learning can be assessed by phone; understanding which domains of learning can be assessed with validity and improving the quality of these assessments may open the door to a more cost-effective measurement of student learning even after schools reopen. This article seeks to integrate principles from the existing literature on face-to-face assessments with findings from a pilot study of phone-based assessments in Botswana to propose an initial set of guidance on which future research can build. These lessons also have applications for assessing child health by phone.
PILOT PHONE-BASED AND COMPLEMENTARY ASSESSMENTS IN BOTSWANA
In our piloting effort, Young 1ove, one of the largest non-government organisations in Botswana, worked in partnership with the Ministry of Basic Education to collect over 10 000 phone numbers in schools in 4 out of 10 regions in Botswana before schools closed for lockdown. Since schools have been closed, caregivers and students have been contacted to participate in remote learning interventions rolled out as a randomised controlled trial in partnership with Columbia University and the Abdul Latif Jameel Poverty Action Lab.17
In this paper, we draw on two assessments from our work in Botswana: a phone-based assessment and a face-to-face assessment (from before schools shut down). The phone-based assessments were administered to 2250 students who were in grades 3–5 before schools shut down. They were conducted by over 70 former teacher aides (whom we refer to subsequently as 'assessors') who called households directly. Training for all assessors was conducted using voice notes and written material shared via WhatsApp. During the survey, an assessor calls a parent and requests to speak to the child at the household. The assessor asks the parent to give the child privacy and makes clear that the questions are 'low stakes', with no rewards or consequences attached, to encourage honest responses. Numeracy questions are then read out loud in order of difficulty of operation: addition, subtraction, multiplication and division. Word problems are texted, and the child is asked to read them out loud. Each question is timed, with a maximum of 2 min, to ensure uniformity across assessments. Finally, the child is asked to explain their work, both to confirm understanding and to provide another measure of independent response.
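The core logic of this protocol (a fixed item order, a 2 min cap per question and per-item scoring) can be expressed compactly. The sketch below is illustrative rather than our pilot's actual instrument: the items, answers and the read_aloud/get_answer helpers are hypothetical stand-ins for the assessor's interaction with the child.

```python
import time

# Illustrative item bank, ordered by difficulty of operation as in the pilot.
ITEMS = [
    ("addition", "What is 24 + 13?", 37),
    ("subtraction", "What is 41 - 17?", 24),
    ("multiplication", "What is 6 x 7?", 42),
    ("division", "What is 48 / 8?", 6),
]
TIME_LIMIT_SECONDS = 120  # each question capped at 2 min for uniformity

def administer(read_aloud, get_answer):
    """Walk through items in order, recording correctness and elapsed time.

    `read_aloud` and `get_answer` are caller-supplied callables standing in
    for the assessor reading the prompt and recording the child's response.
    """
    results = []
    for operation, prompt, answer in ITEMS:
        read_aloud(prompt)
        start = time.monotonic()
        response = get_answer(timeout=TIME_LIMIT_SECONDS)
        elapsed = time.monotonic() - start
        results.append({
            "operation": operation,
            "correct": response == answer,
            "seconds": round(elapsed, 1),
            "timed_out": elapsed >= TIME_LIMIT_SECONDS,
        })
    return results
```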
We complement our data from these phone assessments with data from face-to-face assessments, also collected in Botswana, but before schools were shut down for the pandemic. These included two assessments, both conducted in schools between February and March of 2020 with 1080 students in grades 3–5. In the first, classroom teachers evaluated a 'problem of the day'. One problem was assigned to the whole class, and students sat on the floor at arm's length from their classmates to ensure they responded individually. The teacher read a problem out loud or put it on the board. Students then wrote the problem and their response in an individual booklet. When they finished, they raised their hand and the teacher collected the booklet. After class, the teacher flipped through all booklets and marked in a scoring sheet whether each problem was correct, the type of problem using a defined scheme (eg, addition or subtraction; one-digit or two-digit; with or without borrowing and so on) and a subjective rating of the problem's level of difficulty. The problems of the day were administered daily for a period of 15 days, recorded in individual student booklets and compiled by student and class. To the same sample of students, we administered the more comprehensive Annual Status of Education Report (ASER) test of numeracy.16
These data inform our preliminary practical lessons for future phone-based assessments. Comparing a sample of phone-based assessments with a more comprehensive ASER test demonstrates the promise of phone-based assessments for assessing basic skills. A sample of students from the phone-based assessment of numeracy reflects a similar skill level to the sample of students from the more comprehensive ASER assessment, administered face-to-face before schools shut down (figure 1).18 Moreover, we observe that responses cover the entire distribution of skills, from not being able to do any operations to being able to do division, suggesting that even with simple operations, we can capture an array of student ability.
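The population-level comparison behind figure 1 amounts to tabulating, for each sample, the share of students whose highest mastered operation falls at each level. A minimal sketch of that tabulation, assuming each record stores the highest level a student reached (the column names are hypothetical, not our pilot's data structure):

```python
import pandas as pd

LEVELS = ["none", "addition", "subtraction", "multiplication", "division"]

def level_shares(df: pd.DataFrame) -> pd.Series:
    """Share of students at each highest level reached, in difficulty order."""
    return (df["highest_level"]
            .value_counts(normalize=True)
            .reindex(LEVELS, fill_value=0.0))

# Side-by-side comparison of the two samples, as in figure 1:
# comparison = pd.concat(
#     {"phone": level_shares(phone_df), "face_to_face": level_shares(aser_df)},
#     axis=1,
# )
```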
PRACTICAL LESSONS FOR PHONE-BASED ASSESSMENTS
Although phone-based assessments are little studied, oral assessments of learning are commonly used directly with children and have much to teach about phone-based assessment. Orally administered tests are effective. Commonly used early grade assessments of reading and mathematics (such as ASER, Uwezo, and the Early Grade Reading and Math Assessments) are administered orally and have been validated.19–22 Instructions are presented orally and the response required from the participant is also oral. Some aspects of these assessments are, therefore, suitable for adaptation to phone surveys.
AngristN, etal. BMJ Global Health 2020;5:e003030. doi:10.1136/bmjgh-2020-003030 3
BMJ Global Health
Conducting valid assessments by phone also presents challenges. We aimed to address some of these challenges in adapting oral assessments for administration by telephone, with suggestions drawn from experience and literature. We developed these lessons through an iterative process, in which team members shared their ongoing experiences with phone-based assessments, and previous experience with and literature on oral assessments of learning, to identify suggestions that we would recommend to any team beginning the process of phone-based assessments.
1. Protect children
Much has been written about best practice in phone surveys,23 24 but few phone surveys gather data directly from children.5 It is vital to adapt face-to-face consent procedures and enumerator training to make sure that children and youth are not put in harm's way in the process. For example, assessors can ensure that parents are aware that tests have no direct consequences for children (ie, these are low-stakes assessments), so that adults do not discipline children if they overhear low performance. Supervisors can also monitor a sample of calls to make sure assessors are interacting appropriately with children and youth. One way to accomplish this is to record a random sample of calls and have those automatically sent to supervisors. Furthermore, for assessments with young children, the phones usually belong to the parents who receive the call, so there is almost always another person besides the child in the household who is aware of the assessment, providing a layer of accountability and oversight. Researchers should adapt general principles of research with children and youth for phone-based assessment.25
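One reproducible way to draw the random audit sample of calls is a seeded sample over call identifiers. A minimal sketch under our own assumptions (the 5% share and the identifiers are illustrative, not the pilot's procedure):

```python
import random

def flag_for_review(call_ids, share=0.05, seed=2020):
    """Return a reproducible random subset of call recordings to audit.

    A fixed seed means supervisors and field teams can verify that the
    audit sample was drawn fairly rather than hand-picked. Assumes a
    non-empty list of call identifiers.
    """
    rng = random.Random(seed)
    k = max(1, round(share * len(call_ids)))
    return rng.sample(list(call_ids), k)
```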
2. Test the reliability and validity of your measures
Before rolling out an assessment, it is essential to ensure that it measures the specific skill that you want to measure, rather than, for example, general language skills, and does so reliably.26 27 Fortunately, this can be done at a fraction of the cost of the overall assessment. The simplest psychometric analyses examine the internal structure of the phone-based assessment: internal reliability (Cronbach's alpha),28 factor structure or item analysis, for example using item response theory models. Such analyses help support the reliability of the tool and can identify problematic questions. Ideally, phone-based assessments should be validated against established face-to-face assessments. Such a test of concurrent validity is not possible while schools are closed and communities are locked down.
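For teams new to these checks, internal reliability is cheap to compute once responses are coded as a students-by-items matrix. A minimal sketch of Cronbach's alpha, using the standard classical formula rather than code from our pilot:

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_students x k_items) matrix of item scores.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total score).
    Values near 1 indicate items that hang together; very low values flag
    an unreliable tool or problematic questions.
    """
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
```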
In Botswana, we approached concurrent validity in a two-step process. The first step was to simplify an existing oral assessment in preparation for administration by phone. In this first stage, we assessed whether the simpler version of the face-to-face administered assessment was a valid proxy for the more comprehensive test. Specifically, we compared the 'problem-of-the-day' assessment, which is easily adapted to a phone-based assessment, with the more comprehensive ASER assessment.19 We correlated the difficulty level of the final problem of the day with the comprehensive assessment taken shortly after and found a correlation of 0.69. We further find a high R-squared value of 0.74 and an average relationship estimated by a multivariate regression of 0.70 when we control for school-level variation (figure 2). If replications demonstrated this relationship to be stable in the study population, then it would represent reasonable concurrent validity, a first step towards establishing overall construct validity for the test.29 The second stage of validation will be to test the concurrent validity of the phone-based assessments against face-to-face assessments.
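Analytically, this check is a correlation plus a regression of one measure on the other with school-level variation controlled. A sketch of how such a check might be run, reproducing the form of the analysis rather than our data, with hypothetical file and column names:

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("assessments.csv")  # hypothetical file: one row per student

# Raw concurrent-validity correlation between the two measures.
r = df["problem_of_day_level"].corr(df["aser_level"])

# Relationship controlling for school-level variation via school dummies,
# with standard errors clustered by school.
fit = smf.ols("aser_level ~ problem_of_day_level + C(school_id)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["school_id"]}
)
print(r, fit.rsquared, fit.params["problem_of_day_level"])
```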
3. Keep instructions simple and use practice items to ensure that respondents understand the exercise
As with other oral assessments, whatever you evaluate by phone bundles receptive language skills with the skill you are attempting to test. By phone, in the absence of visual cues, oral assessments are even more a test of receptive language: vocabulary, listening and processing skills. Acknowledging and adjusting for this is particularly important in settings where different respondents speak different languages and may comprehend the language of instruction orally either better or worse than they read it. Simple instructions and practice items can ensure that more of the assessment is focused on the target skill.
Data from our first wave of phone-based results reveal that over 75% of students understood and answered all the problems, and 24% understood all the problems but could only answer some. (Whether or not the student 'understands' the problem is a subjective judgement made by the assessor and so is less objective than the rate of correct answers.) The alignment of skill levels across the phone-based assessment and the face-to-face assessment (figure 1) suggests that with simple problems, like the arithmetic operations we are using, phone-based assessments can accurately capture student ability.

[Figure 1: Percentage of students reaching each level of question difficulty (no operations, addition, subtraction, multiplication and division) in the phone-based sample and the face-to-face Annual Status of Education Report (ASER) test of the same content. These graphs are for the same regions and largely the same set of schools and grades, but they are not matched to the exact same cohort of students. They reveal a similar distribution of learning levels using the phone and face-to-face assessments at the population level in similar geographies and ages, and they increase our confidence in phone assessment. However, this is not yet a formal validity assessment; we plan to implement that in future phone-based assessments.]
4. Some assessments will be more conducive to phone assessment than others
The elements of oral tests with minimal visual stimuli will be easiest to adapt to phone-based testing. For example, the 'word problems' subtest of the Early Grade Math Assessment involves only oral stimuli, whereas the 'missing number' subtest has a grid of numbers that may be hard to replicate on a phone display. That said, phone-based assessments can still incorporate text. Assessors can send a text message and ask the respondent to read the message aloud. In Botswana, we have tried sending simple texts to students in grades 3 through 5, such as 'Katlego has 32 apples and organises them by PLACE VALUE. How many TENS does she have?' We ask the child to read the problem out loud (assessing literacy skills) and then to solve the problem (assessing mathematics skills). We send the text message immediately before the phone call.
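The sequencing matters: the text arrives immediately before the call, the child first reads it aloud (scored as literacy) and then solves it (scored as numeracy). A minimal sketch of that flow; send_sms, place_call and the two scoring helpers are hypothetical stand-ins for the SMS gateway and the assessor's judgement, not a real API:

```python
WORD_PROBLEM = ("Katlego has 32 apples and organises them by "
                "PLACE VALUE. How many TENS does she have?")

def assess_word_problem(phone_number, send_sms, place_call,
                        score_reading, score_solution):
    """Text a word problem just before calling, then score two skills.

    Reading the message aloud yields a literacy observation; solving it
    yields a numeracy observation, from a single texted item.
    """
    send_sms(phone_number, WORD_PROBLEM)  # sent immediately before the call
    call = place_call(phone_number)
    literacy = score_reading(call, WORD_PROBLEM)  # child reads text aloud
    numeracy = score_solution(call, answer=3)     # 32 has three tens
    return {"literacy": literacy, "numeracy": numeracy}
```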
5. Keep it short
Home environments, particularly during lockdowns, may be crowded and noisy, and phone calls can be frequently interrupted. Brief calls and assessments are more effective than longer calls. General guidance on conducting phone surveys suggests keeping them to 30 min.23 However, assessments with young people should be shorter. The Early Grade Reading Assessment (EGRA), a text-based but orally administered assessment, typically takes about 15 min per child when administered in full.30 Some evaluations have used a shorter version of EGRA, focusing on only three subtests.31 32 Calls for phone-based assessments in Botswana are taking between 15 and 20 min. About 50% of each call is logistical (scheduling, organising the set-up of the house and building rapport) and the other 50% is dedicated to the assessment. The best evidence on length would come from systematically testing assessments of different lengths; in the absence of that, our experience may inform other teams in designing their assessments.
Although lengthy assessments may not be possible to conduct via phone, short assessments that are high frequency, simple and cheap can still be informative and easily conducted over the phone. With shorter assessments, if teams want estimates with the same level of statistical precision across the assessed students, then the sample size will need to rise.33 In face-to-face assessments in Botswana earlier this year, as described above, we observed a strong correlation between performance on the single 'problem of the day' and performance on the class assessment (figure 2). Although this was a face-to-face assessment, it demonstrates how simple tests like these, administered by phone, could indicate levels of learning loss or gain and therefore provide useful information for policy-makers and school systems attempting to mitigate the adverse health and educational effects of school closures.
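The sample-size trade-off can be made concrete with classical test theory: random measurement error shrinks the observed effect size by the square root of reliability, so the sample size required for equal power grows roughly in inverse proportion to reliability. A back-of-envelope sketch under that assumption, with our own hypothetical numbers rather than pilot estimates:

```python
import math

def adjusted_sample_size(n_full: int, reliability_full: float,
                         reliability_short: float) -> int:
    """Sample size for a shorter, noisier test at roughly equal power.

    Under classical test theory, required n scales with the inverse of
    test reliability, holding the true effect fixed.
    """
    return math.ceil(n_full * reliability_full / reliability_short)

# Example: if a full assessment with reliability 0.90 needed 1,000 students,
# a brief phone version with reliability 0.60 needs about 1,500.
print(adjusted_sample_size(1_000, 0.90, 0.60))  # -> 1500
```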
6. Experiment with how to get people to pick up the phone
Piloting in Botswana revealed that the combination of a text message followed by a call yielded the highest pick-up rates. This is consistent with evidence elsewhere. Sending a text message to alert respondents to an upcoming call delivered the best responses in India.34 A programme in Liberia sent a text message 5 min before the call and found it helpful in boosting answer rates.13 In Botswana, few people replied to texts alone, and about 70% answered calls alone. Thus, a combination of the two may be most effective.
7. Establish rapport with adult phone owners and youth respondents
Respondents, both adults and youth, may be nervous, particularly during this time of global and local crisis. In many low-income and middle-income countries, conversations between adults and children are less common, as are interactions with strangers.35 In some cultures, questioning directed at children is predominantly a way to obtain information the adult lacks, rather than a way to test another person's knowledge. Rapport, explanation and examples can all help overcome these barriers. Having an advance call with an adult responsible for the target child can increase accuracy, honesty and willingness to participate (beyond the obvious need for consent). In some cases, initial assessment instructions can be delivered through a caregiver, with requests to put the child at ease. Phone-based tests are likely to be challenging with children in the early grades of primary school or in preschool.

[Figure 2: The relationship between student answers on the 'problem of the day' on the last day of class and average learning levels for the whole class after 15 days. Estimates were averaged at the class level within a school for a sample of 40 classes. Each individual student answered a 'problem of the day' in an individual booklet, which was compiled by the class teacher. If students answered problems correctly, then they progressed to more difficult items. At the end of 15 days, a more comprehensive multi-item oral assessment (the Annual Status of Education Report, or ASER, assessment) was administered. In this figure, we compare the final problem-of-the-day level of difficulty with performance on the ASER test.]
8. Choose the most cost-effective approach
The full cost of phone calls to over 2250 households was about US$10 000, including airtime, personnel time, questionnaire design and piloting. This equates to about US$4.40 per child. To put this cost in context, the international assessment Progress in International Reading Literacy Study, which included Botswana in 2011, has standard fees for country participation. In Botswana, the costs were around US$250 000 and about 4000 students participated, yielding a cost of about US$62.50 per child. This is likely a lower bound, as country participation fees probably do not capture all costs. Assessment combining school-based testing (for students in school) and home-based tracking (for students out of school) with classroom observations, as part of a randomised controlled trial in Liberia, cost US$150 per child.36 37
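For transparency, the per-child figures follow directly from the quoted totals (the text rounds US$4.44 to about US$4.40):

```python
# Per-child costs implied by the totals quoted in the text.
phone_total, phone_children = 10_000, 2_250   # Botswana phone pilot
pirls_total, pirls_children = 250_000, 4_000  # PIRLS Botswana, 2011

print(f"Phone-based: ${phone_total / phone_children:.2f} per child")  # ~$4.44
print(f"PIRLS:       ${pirls_total / pirls_children:.2f} per child")  # $62.50
print("Liberia RCT: $150.00 per child")                               # as cited
```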
We use direct phone calls by assessors because access to phones is nearly universal and a common denominator that is widely applicable across contexts. Another potentially lower-cost approach is the use of interactive voice response (IVR) calls, but their feasibility is context-specific, depending on the provider landscape. In Botswana, IVR infrastructure was not readily available. Lessons from direct calls reveal that about 50% of calling time is spent on logistics, including scheduling with parents and students, rescheduling, setting up at the household for the assessment and creating a conducive environment. This implies that methods such as IVR, while potentially cheaper and more scalable, might also have lower take-up rates, as they may be harder to schedule reliably and are less personal. Alternative methods and their relative cost-effectiveness are an empirical question for future work.
We hope this piece motivates further creative low-tech and cost-effective approaches to assessment. In the longer term, if consistently reliable methods and tools to measure learning by phone can be developed, they have the potential to disrupt the way we measure learning, by enabling both high-frequency diagnostics and more cost-effective ways to assess learning outcomes.
9. Account for sample bias
A challenge to conducting phone-based assessments is that there may be systematic biases in sample selection. Although access to phones is nearly universal in Botswana, it is likely that households that do not respond to phone surveys differ from those that do. For example, non-responders may lack access to a phone or may live in a crowded household where it is difficult to speak quietly on the phone. The problem of sample bias applies both for validating phone-based assessments (via face-to-face assessments) and for data collection. Concurrent validity assessments may be flawed if a significant proportion of the face-to-face sample do not respond to phone-based assessments. The first approach to this problem is to document the bias. If socioeconomic indices are available for participating households, then these data can be used to understand how responders and non-responders differ. If bias is a concern, then a sample of non-responders can be selected for follow-up using different assessment methods (eg, asking a neighbour to lend them their phone) and the data from this subgroup can be weighted accordingly in final analyses.38
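One standard implementation of such reweighting is a weighting-class adjustment: estimate response rates within covariate cells (eg, region by socioeconomic index) and weight responders by the inverse of their cell's rate. A minimal sketch under our own assumptions, with a hypothetical sampling frame that includes a boolean responded column (all names are illustrative):

```python
import pandas as pd

def nonresponse_weights(frame: pd.DataFrame, cell_cols: list) -> pd.Series:
    """Inverse-probability weights from response rates within covariate cells.

    Responders in cells with low response rates get larger weights, so the
    weighted responder sample mirrors the full frame on the cell variables.
    Non-responders get weight 0.
    """
    response_rate = frame.groupby(cell_cols)["responded"].transform("mean")
    return (1.0 / response_rate).where(frame["responded"], 0.0)

# Usage: frame["weight"] = nonresponse_weights(frame, ["region", "ses_index"])
```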
CONCLUSIONS
Efforts to assess learning by phone are still new and so should not be used for high-stakes decisions around the future of individual students. However, understanding whether distance-learning efforts are leading to learning, and identifying which groups of children are being most or least disadvantaged by being out of school, should be a central part of the current response as well as of any initiatives to help disadvantaged groups catch up once schools do reopen. That said, researchers should always remember that just as face-to-face assessments have limitations (high cost), so do phone-based assessments (greater difficulty assessing children with certain disabilities, like hearing loss). We have proposed an initial set of practical lessons for phone-based assessments based on the literature and the experience of piloting a phone-based assessment in Botswana. We are continuing to learn in our own practice: for example, we are experimenting with randomising assessors to avoid any systematic enumerator fixed effects and with randomising the set of questions posed to each child, both to measure the reliability of constructs and to back out a measure of sampling error. Our hope is that future research will use, critique and validate these lessons and contribute to a communal effort to develop best practices in this area. Ensuring that children are learning, even when out of school, is crucial not only to their education but also to their health outcomes and the quality of their whole lives.
Twitter Noam Angrist @angrist_noam and David K Evans @DaveEvansPhD
Acknowledgements The authors thank Caton Brewster, who is leading data collection for the digital response in Botswana; Efua Bortsie, who provided digital coordination support; Colin Crossley, who codeveloped digital content; colleagues from Pratham, who provided content inspiration and technical assistance; and Amina Mendez Acosta, who provided research assistance. Seye Abimbola, Sebastian Fehrler, Harry Fletcher-Wood and Alexis Le Nestour provided helpful comments on the draft.
Contributors All authors contributed equally to this project.
Funding This study was funded by Elma Philanthropies, Bill & Melinda Gates
Foundation, UBS Optimus Foundation and Vitol Foundation.
Competing interests None declared.
Patient consent for publication Not required.
6AngristN, etal. BMJ Global Health 2020;5:e003030. doi:10.1136/bmjgh-2020-003030
BMJ Global Health
Provenance and peer review Not commissioned; externally peer reviewed.

Data availability statement The data for this study are described in more detail in the following study: Angrist N, Bergman P, Brewster C, Matsheng M. Stemming Learning Loss During the Pandemic: A Rapid Randomized Trial of a Low-Tech Intervention in Botswana. 2020.

Open access This is an open access article distributed in accordance with the Creative Commons Attribution 4.0 Unported (CC BY 4.0) license, which permits others to copy, redistribute, remix, transform and build upon this work for any purpose, provided the original work is properly cited, a link to the licence is given, and indication of whether changes were made. See: https://creativecommons.org/licenses/by/4.0/.
ORCID iD
David K Evans http://orcid.org/0000-0001-5413-9284
REFERENCES
1 UNESCO. COVID-19 educational disruption and response, 2020. Available: https://en.unesco.org/covid19/educationresponse [Accessed 27 May 2020].
2 Bandiera O, Buehren N, Goldstein MP, et al. The Economic Lives of Young Women in the Time of Ebola: Lessons from an Empowerment Program, 2019. Available: http://documents.worldbank.org/curated/en/452451551361923106/The-Economic-Lives-of-Young-Women-in-the-Time-of-Ebola-Lessons-from-an-Empowerment-Program [Accessed 4 May 2020].
3 Cutler DM, Lleras-Muney A. Education and health: evaluating theories and evidence, 2006. Available: http://www.nber.org/papers/w12352 [Accessed 27 May 2020].
4 Center for Global Development. CGD - COVID education policy tracking. Available: http://www.cgdev.org/covid-education-policy [Accessed 27 May 2020].
5 Asanov I, Flores F, Mckenzie DJ. Remote-learning, time-use, and mental health of Ecuadorian high-school students during the COVID-19 quarantine, 2020. Available: http://documents.worldbank.org/curated/en/328261589899308503/Remote-learning-Time-Use-and-Mental-Health-of-Ecuadorian-High-School-Studentsduring-the-COVID-19-Quarantine
6 Le Nestour A, Moscoviz L. Five findings from a new phone survey in Senegal, 2020. Available: https://www.cgdev.org/blog/five-findings-new-phone-survey-senegal [Accessed 27 May 2020].
7 Quinn DM, Polikoff M. Summer learning loss: what is it, and what can we do about it? 2017. Available: https://www.brookings.edu/research/summer-learning-loss-what-is-it-and-what-can-we-do-about-it/ [Accessed 27 May 2020].
8 Slade TS, Piper B, Kaunda Z, et al. Is 'summer' reading loss universal? Using ongoing literacy assessment in Malawi to estimate the loss from grade-transition breaks. Research in Comparative and International Education, 2017. Available: https://journals.sagepub.com/doi/10.1177/1745499917740657 [Accessed 27 May 2020].
9 Zuilkowski SS, Jukes MCH, Dubeck MM. "I failed, no matter how hard I tried": A mixed-methods study of the role of achievement in primary school dropout in rural Kenya. Int J Educ Dev 2016;50:100–7.
10 Busso M, Munoz JC. Pandemic and inequality: how much human capital is lost when schools close? 2020. Available: https://blogs.iadb.org/ideas-matter/en/pandemic-and-inequality-how-much-human-capital-is-lost-when-schools-close/ [Accessed 27 May 2020].
11 Rapp SR, Legault C, Espeland MA, et al. Validation of a cognitive assessment battery administered over the telephone. J Am Geriatr Soc 2012;60:1616–23.
12 Sticht TG, Hofstetter CR, Hofstetter CH. Assessing adult literacy by telephone. J Lit Res 1996;28:525–59.
13 Lee SH, Nurmatov UB, Nwaru BI, et al. Effectiveness of mHealth interventions for maternal, newborn and child health in low- and middle-income countries: systematic review and meta-analysis. J Glob Health 2016;6.
14 Hazel E, Amouzou A, Park L, et al. Real-time assessments of the strength of program implementation for community case management of childhood illness: validation of a mobile phone-based method in Malawi. Am J Trop Med Hyg 2015;92:660–5.
15 Romero C, Ventura S, de Bra P. Using mobile and web-based computerized tests to evaluate university students. Computer Applications in Engineering Education 2009;17:435–47.
16 Sahin F. Using mobile phones for educational assessment. In: Encyclopedia of mobile phone behavior, 2015. www.igi-global.com/chapter/using-mobile-phones-for-educational-assessment/130132
17 Teachers College, Columbia University Institutional Review Board. Young 1ove's activities of learning interventions and assessment (20-299 protocol), 2020.
18 Angrist N, Bergman P, Brewster C, et al. Stemming learning loss during the pandemic: a rapid randomized trial of a low-tech intervention in Botswana, 2020.
19 Vagh SB. Validating the ASER testing tools: comparisons with reading fluency measures and the Read India measures, 2012. Available: https://img.asercentre.org/docs/Aser%20survey/Tools%20validating_the_aser_testing_tools__oct_2012__2.pdf
20 Uwezo Kenya. Are our children learning? Annual learning assessment report, 2012. Available: https://www.twaweza.org/uploads/files/UwezoKE-ALAReport2012.pdf
21 Dubeck MM, Gove A. The early grade reading assessment (EGRA): its theoretical foundation, purpose, and limitations. Int J Educ Dev 2015;40:315–22.
22 Reubens A. Early grade mathematics assessment (EGMA): a conceptual framework based on mathematics skills development in children, 2009. Available: https://pdf.usaid.gov/pdf_docs/Pnads439.pdf [Accessed 6 Jul 2020].
23 Kopper S, Sautmann A. Best practices for conducting phone surveys. Available: https://www.povertyactionlab.org/blog/3-20-20/best-practices-conducting-phone-surveys
24 Hughes S, Velyvis K. Tips to quickly switch from face-to-face to home-based telephone interviewing, 2020. Available: https://www.mathematica.org/commentary/tips-to-quickly-switch-from-face-to-face-to-home-based-telephone-interviewing [Accessed 27 May 2020].
25 Alderson P, Morrow V. The Ethics of Research with Children and Young People: A Practical Handbook. SAGE Research Methods, 2011. Available: http://methods.sagepub.com/book/ethics-of-research-with-children-and-young-people [Accessed 27 May 2020].
26 Brennan RL. Educational measurement. 4th edition. Rowman & Littlefield Publishers, 2006.
27 Coaley K. An introduction to psychological assessment and psychometrics. 2nd edition. SAGE Publishing, 2014.
28 Bland JM, Altman DG. Statistics notes: Cronbach's alpha. BMJ 1997;314:572.
29 Sackett PR, Lievens F, Berry CM, et al. A cautionary note on the effects of range restriction on predictor intercorrelations. J Appl Psychol 2007;92:538–44.
30 USAID. Early grade reading assessment, 2011. Available: https://www.globalreadingnetwork.net/sites/default/files/eddata/EGRA_FAQs_25Oct11.pdf
31 Blimpo MP, Evans D, Lahire N. Parental human capital and effective school management: evidence from The Gambia, 2015. Available: http://documents.worldbank.org/curated/en/923441468191341801/Parental-human-capital-and-effective-school-management-evidence-from-The-Gambia [Accessed 13 May 2020].
32 Sabarwal S, Evans D, Marshak A. The permanent input hypothesis: the case of textbooks and (no) student learning in Sierra Leone, 2014. Available: http://documents.worldbank.org/curated/en/806291468299200683/The-permanent-input-hypothesis-the-case-of-textbooks-and-no-student-learning-in-Sierra-Leone [Accessed 13 May 2020].
33 Beckstead JW. On measurements and their quality: paper 2: random measurement error and the power of statistical tests. Int J Nurs Stud 2013;50:1416–22.
34 Kasy M, Sautmann A. Adaptive treatment assignment in experiments for policy choice, 2019. Available: https://ssrn.com/abstract=3434834 [Accessed 27 May 2020].
35 Lancy DF. The anthropology of childhood: Cherubs, Chattel, Changelings. 2nd edition. Cambridge: Cambridge University Press, 2014. https://www.cambridge.org/core/books/anthropology-of-childhood/B34D307F81527FC3C91AE9D0B02D48D7
36 Romero M, Sandefur J, Sandholtz WA. Outsourcing education: experimental evidence from Liberia. Am Econ Rev 2020;110:364–400.
37 Romero M, Sandefur J. Beyond short-term learning gains: the impact of outsourcing schools in Liberia after three years, 2019. Available: https://www.cgdev.org/publication/beyond-short-term-learning-gains-impact-outsourcing-schools-liberia-after-three-years [Accessed 11 Jun 2020].
38 Baird S, Hamory J, Miguel E. Tracking, attrition and data quality in the Kenyan life panel survey round 1 (KLPS-1), 2008. Available: https://escholarship.org/uc/item/3cw7p1hx [Accessed 6 Jul 2020].
Article
Standard experimental designs are geared toward point estimation and hypothesis testing, while bandit algorithms are geared toward in‐sample outcomes. Here, we instead consider treatment assignment in an experiment with several waves for choosing the best among a set of possible policies (treatments) at the end of the experiment. We propose a computationally tractable assignment algorithm that we call “exploration sampling,” where assignment probabilities in each wave are an increasing concave function of the posterior probabilities that each treatment is optimal. We prove an asymptotic optimality result for this algorithm and demonstrate improvements in welfare in calibrated simulations over both non‐adaptive designs and bandit algorithms. An application to selecting between six different recruitment strategies for an agricultural extension service in India demonstrates practical feasibility.