Advancing theory in the age of artificial intelligence

British Journal of Educational Technology
Shane Dawson ( https://orcid.org/0000-0003-2435-2193 )
Centre for Change and Complexity in Learning,
University of South Australia
Srecko Joksimovic ( https://orcid.org/0000-0001-6999-3547 )
Centre for Change and Complexity in Learning,
University of South Australia
Caitlin Mills ( https://orcid.org/0000-0003-4498-0496 )
Department of Educational Psychology
University of Minnesota
Dragan Gašević ( https://orcid.org/0000-0001-9265-1908 )
Centre for Learning Analytics and Department of Human Centred Computing
Faculty of Information Technology
Monash University
George Siemens ( https://orcid.org/0000-0002-9567-9794 )
Centre for Change and Complexity in Learning,
University of South Australia
Introduction
Arficial Intelligence (AI) is rapidly advancing. Decades of research is now crossing over into
praccal applicaon in society, as evidenced by the rise of generave AI. Educaon has not been
immune to the uptake. For educaon this presents a duality of potenal outcomes. First, AI
brings the potenal to revoluonize teaching methods, assessment and learner engagement.
Second, AI also harbors numerous unancipated longer term impacts on student learning and
the educaon system broadly. For instance, creave thinking and problem solving are crical in
modern complex sengs. When AI begins to serve as an acve partner in sustained social,
creave, and intellectual pursuits, the impacts remain unknown. An over-reliance on AI systems
may result in a decline in many of the traits that make us human. These traits include
self-regulaon, metacognion, goal orientaon, planning, creave brainstorming, and a range
of skills that could be negavely impacted by automaon or machine take over. Effecve
deployment of AI systems in educaon requires theorecal lenses to guide and direct both
research and pracce. In short, theory provides the guard rails to ensure that principles, values,
and trusted constructs guide adopon of AI in educaonal sengs.
The papers in this special secon argue for the cricality of theory in the design, development
and deployment of AI in educaon. In so doing we queson the connued relevance and value
of exisng theories of learning when AI becomes prominent in classrooms. We call for new
frameworks, models and ways of thinking, ones that include the presence of non-human agents
that are more like an acve partner than a simple technology. Theories provide a foundaon for
understanding the complexies of the educaonal process and can inform the design and
development of AI systems that align with educaonal goals and principles. As AI uptake in
educaon increasingly impacts teaching and learning there are quesons on the connued
relevance of exisng learning theories. The integraon of theorecal frameworks into the
development and implementaon of AI-based educaonal systems is essenal for advancing
the field and achieving opmal learning outcomes . Does the adopon of AI in educaon require
modificaons or revisions in how we learn? Or is a complete restructuring required, resulng in
the need for new theories? Importantly, what should theory offer educators when AI is
included? How these quesons are pursued and addressed have important implicaons. If an
exisng theorecal lens, such as cognivism, is used to evaluate the role of AI, theory will need
to guide new cognive tasks and funcons. For example, how do cognive processes such as
coding, memory, and recall, change when declarave knowledge is no longer the primary intent
of educaonal acvies? AI’s ease of access to this type of knowledge makes it less
instruconally important than before. Similarly, do exisng views of social construcvism cover
AI as a social agent in learning? Or do small theorecal lenses, such as community of inquiry,
sufficiently explain the inclusion of AI in classrooms or do they need to be updated or even
enrely rethought?
Theory old and new:
Technology has been heralded as a needed innovation to improve educational practice. Indeed,
the use and sophistication of technologies to support multi-modal learning have advanced
significantly in recent times, as have insights gained from learning analytics and data science.
Technologies are now highly embedded in education across all levels of schooling, ranging
from the early years to higher education and professional upskilling. Although there is an
abundance of experimental and exploratory research investigating the use of these specific
technologies, relatively few studies have leveraged technological advances to directly challenge
or expand on learning and education theory. Our understanding of learning theory has
remained relatively constant, despite the rapidly changing education context and almost
ubiquitous access to, and availability of, technologies and information. This can be seen through
a review of the research literature in the fields of educational technology, learning analytics (LA),
and AI in education, where a plethora of works focus on predictive models, testing of
novel technologies, or evaluations of impact. Comparatively, there are far fewer examples of
research interrogating or positing new theories of learning experiences in which instructors and
designers integrate and complement the work of artificial agents.
The lack of crical engagement from research in educaon technology, and in parcular LA and
AI, with theory and challenging perspecves may stem from an over reliance on constrained
data sets and tradional research methods (Barmote et al., 2023; Perroa and Selwyn, 2020;
Poquet, et al., 2021). For instance, the majority of LA research findings tend to be derived from
relavely small-scale, or single course studies (Dawson, et al., 2019). As an example in LA,
Dawson and colleagues (2019) draw on Hevner’s research maturity model to demonstrate that
LA has stalled in a cycle of exploratory works. Although such exploratory research is crical for a
field and brings much creavity and rapid trial and tesng, there is also a need to progress
works towards large scale replicaon studies and the establishment of new theory and revision
of exisng theories. However, the praccalies of undertaking AI in educaon and LA research is
funneled more towards the analysis of individual courses in lieu of mul-age and
muldisciplinary data sets derived from tradional methodologies (Arocha, 2021; Dawson, et
al., 2019; Jacobson et al, 2016). The theorecal framing of these works and interpretaon of
findings are oen based on an educaon theory that was conceptualized in a markedly different
era and learning context. Convenonal learning theories were conceived for learning contexts
with limited technology mediaon, let alone the use of advanced technologies and intelligent
agents to the extent that we are seeing today. These technologies have a significant influence
on how courses are designed and delivered alongside instruconal recommendaons. More
crically, they influence the concept of agency and autonomy as recommendaons are oen
made without awareness of the unique developmental needs of each student and without
regard to the longer term impacts of metacognive skills being replaced by automated systems.
Quesons arise regarding the exisng framing of learning theories and whether they remain
applicable in an educaon system that provides for automaon, recommendaons and
intervenon of learning misconcepons or for that maer establishing common standards and
judgment of learning. The lack of alignment between convenonal theories and advanced
technologies mediang new learning experiences points clearly to the need for a revision and
postulaon of new frameworks, views, theories that include these agents as members of the
learning ecosystem.
Papers in this Special Secon:
Papers in this special secon highlight the ways in which technology in general, and AI in
parcular, are transforming learning environments. The papers bring new insights into the
applicaon of AI and educaon technologies to aid learning processes. In so doing, they
collecvely raise quesons about the applicability of tradional learning theories in an
educaon context increasingly powered and influenced by non-human agents.
Järvelä et al. (this issue) pose new ways of thinking about the integration of human and AI
collaboration in socially shared regulation of learning (SSRL). The authors explore the intersection of human
and AI collaboration in SSRL research and propose a hybrid model of
human-AI teaming that holds great promise for enhancing learning outcomes. By leveraging the
strengths of both human and AI collaborators, this approach has the potential to revolutionize
the way we approach teaching and learning, particularly in the context of online and hybrid
learning environments. The authors provide a comprehensive overview of the theoretical
underpinnings of SSRL and offer practical suggestions for the design and implementation of
hybrid human-AI learning systems.
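As a rough illustration of how a regulation "trigger" might be operationalized, the sketch below flags moments in a group's activity log where an AI partner could prompt the group to regulate. The event schema, thresholds, and rules are hypothetical and far simpler than the hybrid model the authors propose.

```python
# Illustrative sketch only: a rule-based "trigger" detector for shared regulation,
# loosely inspired by the trigger concept. Schema, thresholds and function names
# are hypothetical, not Järvelä et al.'s implementation.
from dataclasses import dataclass
from typing import List

@dataclass
class GroupEvent:
    minute: int              # minutes since task start
    messages: int            # chat messages posted by the group in that minute
    confusion_markers: int   # e.g., hedges or question marks detected by an NLP step

def detect_regulation_triggers(events: List[GroupEvent],
                               silence_threshold: int = 0,
                               confusion_threshold: int = 3) -> List[int]:
    """Return minutes at which an AI partner might prompt the group to regulate."""
    triggers = []
    for e in events:
        stalled = e.messages <= silence_threshold               # collaboration has gone quiet
        confused = e.confusion_markers >= confusion_threshold   # signs of shared confusion
        if stalled or confused:
            triggers.append(e.minute)
    return triggers

if __name__ == "__main__":
    log = [GroupEvent(1, 5, 0), GroupEvent(2, 0, 0), GroupEvent(3, 4, 4)]
    print(detect_regulation_triggers(log))  # -> [2, 3]
```

In practice such triggers would be learned or calibrated from data rather than hand-set, but the rule form makes visible where a non-human agent would intervene in the regulation process.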
The paper by Saqr (this issue) tackles an existing challenge for AI and LA research. While the
development of predictive models of academic performance is now a relatively common
practice, demonstrating improvement in outcomes remains elusive. As Winne has argued, such predictive
models falsely assume that all students react in similar ways. In this context, Saqr argues that the
conventional approach of developing predictive models that assume all students respond
similarly is inadequate. Instead, the author proposes incorporating within-person and
between-person variance to establish more accurate and practical models. By doing so, Saqr's
work provides a promising avenue for improving the effectiveness of AI and LA research in
promoting student success.
Kio et al’s paper (this issue) tled "Using causal models to bridge the divide between Big Data
and Educaonal Theory" explores the gap between educaonal theory and big data analycs in
educaon. The authors argue that while big data can provide valuable insights into learning
processes, it is limited by its lack of causal reasoning. To bridge this gap, the authors propose
using causal models, which allow for a deeper understanding of the underlying mechanisms
that drive learning. They provide several examples of how causal models can be used to analyze
educaonal data, and argue that such an approach can help to improve the accuracy and
reliability of educaonal analycs.
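The contrast between associational and causal analysis can be made concrete with a small simulation: when an unmeasured factor (here, prior motivation) drives both tool use and achievement, the naive association is biased, and adjusting for the confounder, as an assumed causal diagram would prescribe, recovers the effect. The variables and effect sizes below are invented for illustration, not drawn from the paper.

```python
# Sketch of a backdoor adjustment implied by an assumed DAG:
#   motivation -> tool_use, motivation -> exam, tool_use -> exam
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 5000
motivation = rng.normal(size=n)                       # the confounder, ignored in the naive model
tool_use = 0.8 * motivation + rng.normal(size=n)      # motivated students use the tool more
exam = 0.3 * tool_use + 1.0 * motivation + rng.normal(size=n)  # true causal effect of tool_use is 0.3
df = pd.DataFrame({"tool_use": tool_use, "exam": exam, "motivation": motivation})

naive = smf.ols("exam ~ tool_use", df).fit()
adjusted = smf.ols("exam ~ tool_use + motivation", df).fit()
print(round(naive.params["tool_use"], 2))     # biased, well above 0.3
print(round(adjusted.params["tool_use"], 2))  # close to the true 0.3
```

The substantive work, as the authors stress, lies in specifying the causal model that licenses the adjustment; the regression itself is the easy part.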
The paper by McMaster and Kendeou (this issue) starts from an existing framework that is used
by many schools and school districts to promote evidence-based instructional practices:
Multi-Tiered Systems of Support (MTSS). MTSS has the potential to assist in making precise and
prompt diagnostic and instructional choices, as well as customizing interventions for children
who require intensive learning support. To support effective and efficient learning in schools,
they propose the theory-based integration of learning analytics and data-driven educational
technology into MTSS. They present a use case that demonstrates (a) how MTSS can be used to
guide the use of technology in educational processes and (b) how educational technology can
be integrated into the MTSS framework.
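To illustrate, in the simplest possible terms, where data-driven tooling could slot into MTSS decision points, the toy rule below maps hypothetical screening and progress-monitoring values to a support tier. The cut points and field names are our own assumptions, not McMaster and Kendeou's.

```python
# Toy sketch of a data-informed MTSS tiering decision; thresholds are hypothetical.
def assign_mtss_tier(screening_percentile: float, weekly_growth: float) -> int:
    """Map a universal screening score and progress-monitoring growth to a support tier."""
    if screening_percentile >= 25:
        return 1   # Tier 1: core instruction is sufficient
    if weekly_growth >= 0.5:
        return 2   # Tier 2: targeted small-group support, student is responding
    return 3       # Tier 3: intensive, individualized intervention

print(assign_mtss_tier(40, 0.6))   # -> 1
print(assign_mtss_tier(15, 0.6))   # -> 2
print(assign_mtss_tier(10, 0.2))   # -> 3
```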
The paper by Gibson et al. (this issue) synthesizes existing learning theories and proposes a new
theory to inform the design of AI systems and improve computational modeling that can
enhance the role of artificial intelligence in facilitating learning processes. At the core of the
theory is a causal learning model designed to explain how learning occurs across micro,
meso, and macro levels. The authors highlight the "natural" role of learning in elaborated
exploration and the filling of niches in a larger environment via incremental steps and leaps of
progress. The paper emphasizes the importance of big data, computing power, and deep
computational learning models in addressing questions about the roles of AI in society.
Rahm-Skågeby and Rahm (this issue) discuss how AI in education can be seen and analysed as
"policies frozen in silicon." The authors suggest that such policies exist as both solutions and
representations of problems. The paper provides a heuristic lens for analyzing and
understanding AI technologies, and how they function as proposed solutions to specific
problems based on differing ideas about how education and learning are enacted. The aim of
the paper is to improve theoretical and analytical approaches in the educational system as AI
becomes more prevalent, and to gain insights into how AI will impact how we assess and
measure learning.
In Bearman and Ajjawi (this issue), the authors argue that intentionally framing AI as a 'black box'
may help students learn to deal with its inherent indeterminacy in an AI-mediated world. The
paper uses relational epistemology as a lens to frame AI, highlighting that it should be
understood through its interaction with humans at a particular moment in time (as opposed to
how the AI was constructed as a separate entity). The paper gives examples to illustrate this
point, making the case against over-focusing on "explainable AI" as a way to understand an
AI-mediated world by arguing that: (a) explainability does not necessarily equate to transparency
or understanding, and (b) the idea of explainability may require us to assume that such
knowledge is fixed and measurable, both of which the authors suggest might "miss the point"
from a relational knowledge view of AI. The paper makes the case for a pedagogy focused
on what AI and humans do together, including how to orient quality standards within
sociotechnical ensembles, designing rubrics for ambiguity and complexity, and developing
digital literacies for an AI-mediated world.
Hollander et al. (this issue) provide an outlook on how AI and computational linguistics can
guide reading development for diverse populations of learners. The authors organize a
framework that captures the complexities of reading development, including the need to
consider reading as a developmental process that involves a complex set of knowledge, skills,
strategies, and dispositions on the way to becoming "proficient." The paper provides a helpful literature
review that outlines the state of the literature in terms of foundational theory and current
educational technologies, and discusses some of the barriers and opportunities that come along
with traditional school organization. With future R&D in mind, the authors outline the key
problems, consequences, AI opportunities, and desired outcomes for literacy, which will
undoubtedly be a helpful guide as the field progresses in coming years.
Hilpert et al. (this issue) address the need for engagement in STEM learning and the accompanying need for
student self-regulation. They build on an extensive history of science of learning research that uses
digital trace data to create cognitive constructs that provide insight into engagement, social networks,
community, and metacognition. In this paper, they detail the importance of regularity of engagement as
a strong predictor of course outcomes and the effects of a science-of-learning-to-learn intervention
designed to foster student SRL and ongoing engagement. Their results suggest promising and sustained effects of
this training, raising the need to consider theoretical approaches that integrate behavioral
observations with cognitive constructs in digital education.
Bauer et al. (this issue) address a critical area of learning related to feedback. Feedback is central to
guiding student progress, and traditional approaches rely primarily on human observation. With the
development of large language models, and natural language processing in general, new opportunities
exist to offer feedback to learners. The authors detail how textual artifacts can be enhanced by AI feedback
and offer a framework to connect feedback processes to adaptive student support. As digital learning
grows in importance in educational settings, the inclusion of more diverse and multi-modal artifacts will
require a similar updating of theory and constructs to ensure feedback remains a driver of overall student
success.
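As a concrete, if simplified, illustration of the kind of NLP building block such a framework might draw on, the sketch below classifies peer-feedback sentences into components (praise, problem, suggestion) so that adaptive prompts could target whatever is missing. The labels, training sentences, and pipeline are our own assumptions rather than Bauer et al.'s method.

```python
# Minimal sketch: classify peer-feedback sentences by feedback component.
# The tiny hand-labelled training set is invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_sentences = [
    "Great introduction, very clear.",                           # praise
    "I really liked your examples.",                             # praise
    "The argument in section two is hard to follow.",            # problem
    "Your conclusion does not address the research question.",   # problem
    "You could add a citation to support this claim.",           # suggestion
    "Try reorganizing the paragraphs by theme.",                 # suggestion
]
train_labels = ["praise", "praise", "problem", "problem", "suggestion", "suggestion"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(train_sentences, train_labels)

print(clf.predict(["Maybe you could shorten the abstract."]))  # likely 'suggestion'
```

In a real deployment the classifier would be trained on a large annotated corpus or replaced by a large language model, but the pipeline shape, from learner text to a component label to an adaptive prompt, is the same.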
Finally, Giannakos and Cukurova (this issue) explore the potential of multimodal learning analytics
to understand collaborative problem-solving processes in educational contexts. The authors
highlight the limitations of relying solely on self-reported data or manual observation, and argue
that incorporating data from multiple sources (e.g., video recordings, eye-tracking, and
physiological sensors) can provide a more complete picture of how students collaborate and
learn. They also discuss the challenges and opportunities of using machine learning to
automatically extract meaningful patterns from these rich, multimodal datasets.
CONCLUSION:
The papers in this special secon highlight the transformave potenal of AI in educaon. The
speed and sophiscaon of AI is revoluonizing educaon. This brings numerous noted, yet sll
unrealised, benefits for teaching methods, assessment, and learner engagement. However, the
introducon and speed of deployment of AI in educaon also poses challenges and potenal
negave impacts on student learning and the educaon system as a whole. The deployment of
AI in educaon requires theorecal frameworks to guide and direct both research and pracce.
The lack of crical engagement with theory in educaon technology research will severely
impede progress towards new theories that can more effecvely account for the changes and
complexies of AI integraon into learning experiences. As AI connues to impact teaching and
learning, it raises quesons on the connued relevance of exisng learning theories and the
need for new frameworks, models, and ways of thinking that incorporate non-human agents as
acve partners. Effecve integraon and tesng of new theorecal frameworks into AI-based
educaonal systems is essenal for advancing the field.
References
Arocha, J. F. (2021). Scientific realism and the issue of variability in behavior. Theory & Psychology, 31(3), 375-398.
Bartimote, K., Howard, S., & Gašević, D. (Eds.) (2023). Theory informing and arising from learning analytics. New York: Springer.
Chen, X., Xie, H., Zou, D., & Hwang, G. J. (2020). Application and theory gaps during the rise of artificial intelligence in education. Computers and Education: Artificial Intelligence, 1.
Dawson, S., Joksimovic, S., Poquet, O., & Siemens, G. (2019). Increasing the impact of learning analytics. In Proceedings of the 9th International Conference on Learning Analytics & Knowledge (pp. 446-455).
Garrison, D. R. (2016). E-learning in the 21st century: A community of inquiry framework for research and practice. Taylor & Francis.
Jacobson, M. J., Kapur, M., & Reimann, P. (2016). Conceptualizing debates in learning and educational research: Toward a complex systems conceptual framework of learning. Educational Psychologist, 51(2), 210-218.
Perrotta, C., & Selwyn, N. (2020). Deep learning goes to school: Toward a relational understanding of AI in education. Learning, Media and Technology, 45(3), 251-269.
Poquet, O., Kitto, K., Jovanovic, J., Dawson, S., Siemens, G., & Markauskaite, L. (2021). Transitions through lifelong learning: Implications for learning analytics. Computers and Education: Artificial Intelligence, 2, 100039.
Zawacki-Richter, O., Marín, V. I., Bond, M., & Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education – where are the educators? International Journal of Educational Technology in Higher Education, 16(1). https://doi.org/10.1186/s41239-019-0171-0