
Using Mobile Phones for Educational Assessment

Authors: Fusun Sahin (University at Albany, USA)

Abstract

Mobile assessment (m-assessment) is a natural extension of incorporating technology into educational assessment. M-assessment is an emerging field, with foundational studies first published in 2005, but it has drawn interest from scholars around the world, who have since examined the delivery and effects of m-assessment. Current research encompasses the use of m-assessment in various contexts, including mobile environments, classrooms, work-based settings, informal learning settings, and distance education settings. Current studies also report the effects of m-assessment on student achievement and attitude, and highlight advantages and concerns regarding the administration of m-assessment. The article concludes with a statement of future research imperatives in four areas: extending the purpose of m-assessment, extending the contexts in which m-assessment can be used, improving the delivery of m-assessment, and advancing research to evaluate the effects of m-assessment.
Copyright © 2015, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.
Category: Activities and Processes
DOI: 10.4018/978-1-4666-8239-9.ch010
Using Mobile Phones for Educational Assessment
INTRODUCTION
Educational assessment is defined as collecting information about the content and depth of student knowledge to inform teachers, administrators, policy makers, and the public, presumably for the purpose of enhancing future outcomes (Pellegrino, 2002). Mobile assessment (m-assessment) means assessing learners via mobile devices. For the purpose of this study, m-assessment refers to assessing learners via mobile phones, i.e., tools that have calling and texting functions. Mobile phone applications (applications or apps) are small software programs that can be installed on smart phones to extend their capabilities. Computerized Adaptive Testing (or computer adaptive tests) refers to administering tests on computers so that the test is tailored to the examinee's trait or ability level (Chang & Ying, 2007).
OVERVIEW
M-assessment is rooted in efforts to incorporate technology into educational assessment. The National Research Council (2001) stated that technological advances had enormous potential for advancing the science, design, and use of educational assessment, especially in the classroom assessment context. The influence of technology was expected to spread beyond classroom tests to high stakes tests as well. For example, Bennett (1999) claimed that test design, item generation, task presentation, scoring, and the purpose and location of high stakes testing would all be affected. Incorporating technology into educational assessment started with computer-based assessments; later, mobile devices such as Personal Digital Assistants (PDAs), iPads, and iPods were used. Eventually, almost all the affordances of these small computers were collected in a single piece of equipment: the mobile phone. Educational assessment has benefited from mobile phones as well.
The earliest studies on m-assessment were published in the mid-2000s. Although m-assessment is an emerging topic, it has attracted the attention of scholars around the world. Dr. McGuire at Anglia Polytechnic University, United Kingdom, was one of the first scholars to publish on m-assessment. McGuire (2005) used mobile phones to collect student feedback via automated phone calls. Dr. Virvou and Dr. Alepis at the University of Piraeus, Greece, are also among the first scholars to publish on m-assessment. Virvou and Alepis (2005) assessed students' writing performance and provided feedback via mobile phones. Dr. Susono and Dr. Shimomura (2006) at Mie University, Japan, are also among the pioneers; they used mobile phones to present in-class survey questions in Quick Response (QR, i.e., visual square code) format. In the following years, m-assessment studies focused on delivering computer adaptive tests via mobile phones, with Dr. Triantafillou, Dr. Georgiadou, and Dr. Economides at the University of Macedonia, Greece, publishing the first studies on delivering computer adaptive tests via mobile devices in 2008 (Triantafillou, Georgiadou, & Economides, 2008a, 2008b).
As one of the earliest studies, McGuire (2005) made use of the calling function of mobile phones by presenting questions to students
via automated calls. The automated call system was developed for use outside the classroom for self- and peer assessment, as well as for collecting student data easily and reducing teacher workload. Automated calls reached out to students who were working on their end-of-year projects and asked them questions about their progress. McGuire interviewed 25 students who had used m-assessment, as well as their teachers, to learn about their experiences with the system. Students reported that m-assessment increased their motivation, facilitated self-directed learning, and improved student-teacher relationships. Consistent with the students' experiences, teachers also observed that students' motivation and self-esteem increased, that students took responsibility for their learning and became independent learners, and that the system improved teacher-student relationships.
In the same year, Virvou and Alepis (2005) developed and evaluated an authoring tool with a specific application capable of automatically scoring student responses (see Williamson, Mislevy, & Bejar, 2006, on automated scoring). This application could be used both inside and outside the classroom for self-assessment. Virvou and Alepis intended to support instruction, increase student-teacher interaction, and reduce the cost and time of assessment. Ten instructors and 50 students at the high school and college levels were interviewed. Both instructors and students found m-assessment useful for their courses. Students especially appreciated the user-friendliness of the authoring tool and found it helpful for keeping track of their progress and preparing for the course.
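The idea of automatically scoring open-ended responses can be illustrated with a minimal keyword-matching sketch. This is a hypothetical illustration, not the actual scoring method of Virvou and Alepis's tool; the rubric, item, and function names are invented for the example.

```python
# Minimal sketch of automated response scoring: award partial credit
# for each expected keyword found in a free-text answer. The keywords
# and the sample answer are illustrative only.

def score_response(response: str, keywords: list[str]) -> float:
    """Return the fraction of expected keywords present in the response."""
    text = response.lower()
    hits = sum(1 for kw in keywords if kw.lower() in text)
    return hits / len(keywords) if keywords else 0.0

item_keywords = ["photosynthesis", "chlorophyll", "sunlight"]
answer = "Plants use sunlight and chlorophyll to perform photosynthesis."
print(score_response(answer, item_keywords))  # 1.0
```

Production systems use far more sophisticated techniques (e.g., natural language processing), but even a rule-based scorer like this captures the core appeal reported above: instant feedback without teacher workload.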
A year later, Susono and Shimomura (2006) prepared a survey in QR format, enabling students to easily access the survey questions via the World Wide Web. Students could read the survey questions using their mobile phones, answer the questions, and write comments. Meanwhile, teachers could see students' answers and comments immediately after students sent their responses and could provide feedback to the students. Susono and Shimomura introduced this m-assessment practice to a class of students and reported some concerns about the delivery.
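The immediate-feedback loop described above — students submit answers from their phones and the teacher sees live tallies — can be sketched in a few lines. The class and method names below are illustrative assumptions, not taken from the Susono and Shimomura system.

```python
# Minimal sketch of an in-class survey with live tallies: students
# submit choices from their phones; the teacher's view updates as
# soon as each response arrives.
from collections import Counter

class InClassSurvey:
    def __init__(self, question: str, choices: list[str]):
        self.question = question
        self.choices = choices
        self.responses: Counter = Counter()

    def submit(self, choice: str) -> None:
        """Record one student's answer, rejecting unknown options."""
        if choice not in self.choices:
            raise ValueError(f"unknown choice: {choice!r}")
        self.responses[choice] += 1

    def tally(self) -> dict:
        """Teacher's live view: counts for every choice, including zeros."""
        return {c: self.responses[c] for c in self.choices}

survey = InClassSurvey("Was today's pace OK?",
                       ["too fast", "just right", "too slow"])
for answer in ["just right", "too fast", "just right"]:
    survey.submit(answer)
print(survey.tally())  # {'too fast': 1, 'just right': 2, 'too slow': 0}
```

In the actual study, the QR code simply encoded the survey's web address, so students reached a form like this without typing a URL on a phone keypad.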
Between 2008 and 2009, scholars from Greece, Taiwan, Spain, and the Netherlands published m-assessment studies on computer adaptive tests. Computer adaptive tests were extensions of Computer Based Testing ([CBT], see Mills, Potenza, Fremer, & Ward, 2002), made possible by advances in measurement theory (see Hambleton, Swaminathan, & Rogers, 1991, for Item Response Theory [IRT]). In computer adaptive tests, the examinee's responses are automatically scored after a number of items are posed, and new items are given to the examinee depending on the calculated score. Computer adaptive tests therefore have two main advantages: precision and efficiency. First, computer adaptive tests can provide more precise results than other modes of assessment, since the tests are tailored to the examinee's ability level. Second, computer adaptive tests can be more efficient than other modes of testing, as they usually require less time both to measure a participant's ability and to score.
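The score-then-select cycle behind adaptive testing can be sketched with a toy one-parameter (Rasch) IRT model: after each response the ability estimate is updated, and the next item is the one whose difficulty lies closest to the current estimate (a simple stand-in for maximum-information selection). The item bank, the step-size update rule, and all names below are illustrative assumptions, not the procedure of the cited studies.

```python
# Toy adaptive-testing loop under a Rasch (1PL) IRT model.
import math

def p_correct(ability: float, difficulty: float) -> float:
    """Rasch model: probability of answering an item correctly."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

def run_adaptive_test(difficulties, answer, n_items=5, step=0.6):
    theta = 0.0                      # start at average ability
    remaining = list(difficulties)
    for _ in range(n_items):
        # Select the item most informative at the current estimate,
        # i.e., the one with difficulty closest to theta.
        item = min(remaining, key=lambda d: abs(d - theta))
        remaining.remove(item)
        correct = answer(item)
        # Nudge the estimate by the residual (a crude estimator,
        # standing in for proper MLE/EAP updating).
        theta += step * ((1.0 if correct else 0.0) - p_correct(theta, item))
    return theta

# Simulated examinee who answers every item of difficulty <= 1.0 correctly.
bank = [-2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5, 2.0]
estimate = run_adaptive_test(bank, answer=lambda d: d <= 1.0)
print(round(estimate, 2))  # ability estimate after five items
```

Because each item is chosen near the current estimate, the test homes in on the examinee's level with fewer items than a fixed-form test — the source of the precision and efficiency advantages noted above.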
In 2008, Triantafillou et al. (2008a) published the first mobile computer adaptive testing study. Triantafillou and colleagues aimed to administer tests efficiently and to make computer adaptive testing accessible from anywhere, specifically both inside and outside the classroom, with the help of m-assessment. The system was evaluated with 12 students who tried it out by taking a generic test. The time required to finish the generic test was recorded and compared with the time needed to take the test in paper-and-pencil form. Participants also responded to a seven-item questionnaire about their experiences. Results indicated that less time was required to gather information about test takers' ability with m-assessment because the test was adaptive. The authors noted that students who took the test on mobile phones found it interesting and attractive, user-friendly, and with a clear and straightforward interface.
Addresses how new technology and advances in cognitive and measurement science can transform large-scale educational assessments, particularly testing for educational admissions. The critical assessment areas discussed are test design, item generation, task presentation, scoring, and testing purpose and location. For each area, the article identifies key assessment innovation, describes why it is important, calls attention to issues it raises, speculates about eventual effects, and identifies trends to look for as the innovation progresses. It is concluded that the transformation in large-scale educational assessment should be driven by cognitive, measurement, and domain-based principles, as well as the technological innovations, required to respond to educational needs effectively. (PsycINFO Database Record (c) 2012 APA, all rights reserved)