International Journal of Human–Computer Interaction
Factors Influencing Learner Attitudes Towards
ChatGPT-Assisted Language Learning in Higher
Education
Qianqian Cai, Yupeng Lin & Zhonggen Yu
To cite this article: Qianqian Cai, Yupeng Lin & Zhonggen Yu (15 Oct 2023): Factors Influencing
Learner Attitudes Towards ChatGPT-Assisted Language Learning in Higher Education,
International Journal of Human–Computer Interaction, DOI: 10.1080/10447318.2023.2261725
To link to this article: https://doi.org/10.1080/10447318.2023.2261725
Published online: 15 Oct 2023.
Factors Influencing Learner Attitudes Towards ChatGPT-Assisted Language
Learning in Higher Education
Qianqian Cai, Yupeng Lin, and Zhonggen Yu
Faculty of Foreign Studies, Beijing Language and Culture University, Beijing, China
ABSTRACT
Concerns regarding the potential risks associated with learners’ misusing ChatGPT necessitate an
extensive investigation into learner attitudes towards ChatGPT-assisted language learning. This
study adopts a mixed-method approach, combining structural equation modeling techniques and
interviews. It aims to examine the influencing factors of learner attitudes regarding ChatGPT-
assisted language learning under the extended three-tier technology use model from an interdis-
ciplinary perspective, including the technology acceptance model, etc. The study finds that infor-
mation system quality and hedonic motivation are more significant in contributing to
performance expectancy and perceived satisfaction compared to self-regulation in ChatGPT-
assisted language learning. Behavioral intention is a better predictor of learning effectiveness in
ChatGPT-assisted language learning than perceived satisfaction and performance expectancy. This
research also examines the partial or full mediating effects of behavioral intention and perform-
ance expectancy between other variables. Although this study is limited by some aspects (e.g., the
outdated version of ChatGPT-3 or ChatGPT-3.5), it holds substantial implications for future practice
and research. It calls for more attention from future developers to the hedonic motivation and
information services of ChatGPT, and from future researchers to a more comprehensive insight
into the influencing factors of learner attitudes towards ChatGPT-assisted language learning.
KEYWORDS: Learner attitudes; ChatGPT; language learning; factors; higher education
1. Introduction
Researchers have been paying growing heed to artificial
intelligence (AI) chatbots in teaching and learning (Wu &
Yu, 2023). ChatGPT, an artificial intelligence chatbot devel-
oped by OpenAI, merits more attention from education
researchers (Pavlik, 2023). ChatGPT aims to cultivate inter-
actions and conversations centering on user questions and
application responses, which constitute fundamental activ-
ities in interactive learning (Rospigliosi, 2023). Pask’s (1976)
conversation theory emphasizes the significance of inter-
active dialogue in technology for interactive learning envi-
ronments, leading to a strong belief in the benefits of
artificial intelligence chatbots in educational practice.
However, concerns have emerged regarding the potential
threats and risks associated with learners’ misusing
ChatGPT (Rospigliosi, 2023). Consequently, it becomes cru-
cial to comprehend students’ attitudes towards ChatGPT.
There remains a scarcity of research exploring learner atti-
tudes towards ChatGPT-assisted language learning. The appli-
cation of artificial intelligence chatbots often involves specific
fields, for example, chatbot-assisted language learning (Xia
et al., 2023). However, given the strengths and weaknesses of
ChatGPT, it is undeniable that further investigation is
required to tap into the roles of ChatGPT in interactive learn-
ing environments (Rospigliosi, 2023). Researchers, students,
and instructors have opportunities to explore the learning
effectiveness and utility of software like ChatGPT within lan-
guage learning environments alongside advancements in
information technologies. Research efforts should focus on
harnessing the benefits of ChatGPT to facilitate interactive
language learning more effectively.
A systematic review of previous studies using structural equation modeling on chatbots verifies the
research gap. On June 5, 2023, we searched the keywords “structural equation model*” and
“chatbot” by “Topic” (i.e., title, keyword, and abstract) in the Web of Science Core Collection. We
obtained 53 studies on chatbots using structural equation
modeling. Among them, there were 19 studies on commerce
(e.g., Asante et al., 2023), 14 on the service industry (e.g.,
Balakrishnan et al., 2022), six on psychology, health, and
social interaction (e.g., Svikhnushina & Pu, 2022), five on
tourism (e.g., Pillai & Sivathanu, 2020), and two on medi-
cine (e.g., Li et al., 2022). As regards the learning domain,
there were four studies on non-language specific topics,
such as the technology acceptance of Chatbot (e.g., Almahri
et al., 2020), chatbot use in visual design, chatbot uses for
knowledge sharing and student interest, and one on the
motivation of EFL learners (Ebadi & Amini, 2022).
However, there was no study on the learner attitudes
towards ChatGPT-assisted language learning. The visualiza-
tion analysis of Lin and Yu (2023a) also confirms this gap.
CONTACT Zhonggen Yu 401373742@qq.com. Supervisor in Faculty of International Studies, Beijing Language and Culture University, Beijing, China
This article has been republished with minor changes. These changes do not impact the academic content of the article.
© 2023 Taylor & Francis Group, LLC
The present study aims to delve into the influencing fac-
tors of learner attitudes towards ChatGPT-assisted language
learning based on the three-tier technology use model to fill
the research gaps. The variables under investigation encom-
pass information system quality, hedonic motivation, self-
regulation, perceived satisfaction, performance expectancy,
behavioral intention, and learning effectiveness. This study
adopts mixed methods that integrate structural equation
modeling and interviews. We employ seven scales to assess
the current status of these variables, and a supplementary
interview question also captures students’ opinions regarding
the strengths, weaknesses, and recommendations about lan-
guage learning assisted by ChatGPT 3.0 or 3.5. Ultimately,
the study establishes a model explaining and predicting
learner attitudes towards language learning using ChatGPT.
2. Literature review
2.1. ChatGPT
ChatGPT relies on a large language model to obtain resour-
ces and information from the internet to provide proper
suggestions and answers. It is an application built on a powerful machine-learning system, the
Generative Pre-trained Transformer (GPT-3), created by OpenAI
(Rospigliosi, 2023). ChatGPT differed from traditional
search engines in its ability to sustain the conversation
through follow-up questions. This unique feature enabled
tailored responses to address users’ specific challenges,
thereby individualizing answers based on an existing corpus
(appropriability). Appropriability means that learners make
ChatGPT their own by asking questions in their own
words and personalizing its responses to their specific needs.
Moreover, ChatGPT facilitated learners’ self-reflection
(evocativeness) through question-based conversations, which
scaffolded learning, promoted awareness, and stimulated
critical thinking (Harel & Papert, 1990). Evocativeness
depicts the learning events or materials’ ability to arouse
inner thoughts. Furthermore, ChatGPT’s conversational for-
mat allowed for the integration of various meanings and
concepts (integration). Integration represents the learning
materials’ capability to integrate diverse concepts and defini-
tions with previous knowledge frameworks.
AI chatbots (e.g., ChatGPT) have also been employed in
language teaching and learning. ChatGPT is one of the most
developed AI chatbots that can offer language input, instant
feedback, and formative assessments (Huang et al., 2022). It
starts a novel and interesting way of language learning by
mimicking human conversation and interactions (Kohnke
et al., 2023). Specifically, ChatGPT can recognize the words’
meaning based on context, modify and explain language
errors, and generate diverse texts, including advertisements,
emails, etc. It can also provide dictionary definitions and
examples, annotate texts, and formulate
quizzes. Moreover, learners can use ChatGPT to make notes,
rewrite learning materials, and request explanations in both
their primary and second language (Kohnke et al., 2023).
2.2. Learner attitudes towards ChatGPT-assisted
language learning based on the three-tier technology
use model
Personal attitudes played a pivotal role in influencing the
utilization of information technology (Liaw, 2008).
Understanding learner attitudes towards ChatGPT was cru-
cial for fostering effective learning environments. Liaw
(2008) formulated the three-tier technology use model from
an interdisciplinary perspective to illustrate user attitudes
towards information technology. The model comprised three
layers pertaining to personal attitudes towards information
technology. These layers included the personal elements
(e.g., self-regulation and motivation) and information system
quality (environmental factors) layer, the affective and cog-
nitive layer (e.g., perceived satisfaction and performance
expectancy), and the behavioral intention layer (Liaw, 2008).
Notably, Liaw and Huang (2016) discovered that students’
behavioral intention positively impacted their learning
effectiveness when utilizing e-books. Hence, when exploring
the learner attitudes towards ChatGPT-assisted language
learning, we also consider the layer of learning effectiveness
(Figure 1).
2.3. Self-regulation
Self-regulation was an individual characteristic that was crucial
for students’ learning outcomes. Self-regulation, also called
self-regulated learning, in this study means the process by
which language learners activate and maintain ChatGPT-ori-
ented behaviors, cognitions, and affects (Zimmerman &
Schunk, 2011). Self-regulated language learning was essential
for L2 learners who wished to improve their language profi-
ciency (Kohnke, 2023). Like other artificial intelligence tech-
nologies that engaged with students, chatbots could allow
learners to self-regulate their language learning process and
interaction with chatbots (Xia et al., 2023). Individual differen-
ces, such as self-regulation, influenced the interaction between
technology and humans, which was highlighted by Sharples
et al.’s (2005) activity theory (Liaw & Huang, 2016). By pro-
viding feedback, ChatGPT empowered self-regulated language
learning (Kohnke, 2023) and fostered personalized learning
for many students without limits in time and location (Wu &
Yu, 2023).
Figure 1. The extended Three-Tier Technology Use Model (3-TUM) by the author. Note. Adapted from “Investigating students’ perceived satisfaction, behavioral
intention, and effectiveness of e-learning: A case study of the Blackboard system,” by Liaw (2008). Copyright 2023 by Elsevier and Copyright Clearance Center.
2.4. Hedonic motivation
Hedonic motivation was a significant personal characteristic
influencing human behavior and experiences. Motivation
represents an individual’s drive towards spontaneous actions.
Hedonic motivation in this study, by definition, is the fun,
enjoyment, or pleasure perceived from using ChatGPT for
language learning. ChatGPT has been shown to elicit a sense
of fun and pleasure among L2 learners during their inter-
action with ChatGPT (Kohnke, 2023). Moreover, Hsu et al.
(2023) demonstrated that the incorporation of expert deci-
sion-making mechanisms into AI chatbots enhanced stu-
dents’ enjoyment of the learning process. Compared with
traditional learning forms, learning through AI chatbots
resulted in higher levels of intrinsic motivation and per-
ceived enjoyment among students (Yin et al., 2021). The
hedonic motivation was crucial for ChatGPT-assisted lan-
guage learning.
2.5. Information system quality
The information system quality of AI chatbots, such as
ChatGPT, encompassed system and information quality in
the present study. We define information system quality as
the accuracy and efficiency of information and content gen-
erated by ChatGPT (DeLone & McLean, 1992). The infor-
mation system quality in this study is regarded as the extent
to which the information yielded by ChatGPT expresses the
expected meaning (Wut & Lee, 2022). The information sys-
tem quality of ChatGPT involves the quality of feedback and
response measured by their accuracy (Liaw, 2008; Tlili et al.,
2023). The response quality of ChatGPT includes the quality
of the dialogue and the accuracy of the results generated
within the interaction (Tlili et al., 2023). Additionally, the
efficiency of ChatGPT involves internet connectivity and
linking speed (Liaw, 2008). The information system quality
was pivotal in successful and effective learning using AI
chatbots.
2.6. Performance expectancy
In the study, performance expectancy refers to students’ per-
ceptions of how AI chatbots help attain goals in their learn-
ing performance. The performance expectancy of ChatGPT
was theoretically related to students’ perception of the infor-
mation generated by ChatGPT as valuable and beneficial
(Tlili et al., 2023). Students tended to perceive the usefulness
of a learning tool when they felt free to learn at their own
pace, access flexible options (Yin et al., 2021), and receive
individualized support (Lee et al., 2021). Further, the educa-
tional information quality of the system contributed to its
usefulness in disciplinary learning (Wut & Lee, 2022).
Students’ perceptions of fun or enjoyment were also relevant
to their perceived assistance and utility of the information
system (Al-Sharafi et al., 2022). Moreover, perceived
enjoyment and information system quality were crucial
determinants of performance expectancy in e-learning sys-
tems (Al-Fraihat et al., 2020; Salloum et al., 2019).
Therefore, we hypothesized as follows.
H1: Self-regulation positively and significantly predicts
performance expectancy in ChatGPT-assisted language learning
in higher education.
H2: Hedonic motivation positively and significantly predicts
performance expectancy in ChatGPT-assisted language learning
in higher education.
H3: Information system quality positively and significantly
predicts performance expectancy in ChatGPT-assisted language
learning in higher education.
2.7. Perceived satisfaction
Satisfaction played a crucial role in the learners’ experiences
with information technology. This study depicts perceived
satisfaction as people’s affective responses derived from their
language learning experience in ChatGPT. Perceived enjoy-
ment and usefulness were important antecedents of satisfac-
tion in digital textbook learning (Joo et al., 2017).
Specifically, the mobile-learning design with interesting and
enjoyable content of mobile learning systems could deliver
greater satisfaction (Al-Sharafi et al., 2022). Chao (2019) tes-
tified to the effect of learners’ perceptions of usefulness on
satisfaction in mobile learning. Learners tended to be
satisfied with chatbots when they met their learning needs
(Al-Sharafi et al., 2022). Other studies also unearthed the
positive effect of perceived usefulness on m-learning satisfac-
tion (Al-Emran et al., 2020; Kim-Soon et al., 2017). Further,
the quality of the information system was closely associated
with learners’ perception of satisfaction (Ashfaq et al., 2020;
Wut & Lee, 2022). Moreover, Liaw and Huang (2016)
revealed that self-regulation positively predicted the
satisfaction of learning through e-books. To explore these
predicting relationships between the above variables in
ChatGPT-assisted language learning contexts, we intended
to test the following hypotheses:
H4: Self-regulation positively and significantly predicts perceived
satisfaction in ChatGPT-assisted language learning in higher
education.
H5: Hedonic motivation positively and significantly predicts
perceived satisfaction in ChatGPT-assisted language learning in
higher education.
H6: Information system quality positively and significantly
predicts perceived satisfaction in ChatGPT-assisted language
learning in higher education.
H7: Performance expectancy positively and significantly predicts
perceived satisfaction in ChatGPT-assisted language learning in
higher education.
2.8. Behavioral intention
Behavioral intention involves users’ intention to adopt and
use a specific technology tool. This study regards behavioral
intention as the degree to which a learner intends to adopt ChatGPT as a
language learning tool. Learners’ perception of usefulness,
also known as performance expectancy, was positively asso-
ciated with their adoption intentions of MOOCs (Ma & Lee,
2019) and their attitudes towards these technology tools
(Malik et al., 2021). Foroughi et al. (2023) also found the
influence of performance expectancy on students’ intention
to use ChatGPT in learning general courses. Furthermore,
learners’ satisfaction with intelligent chatbots could stimulate
their continuing intention to leverage these conversational
chatbots to enhance learning activities (Al-Sharafi et al.,
2022). However, Al-Emran et al. (2020) asserted that the
predictive effects of performance expectancy and perceived
satisfaction on the continual intention of m-learning were
insignificant. In view of the inconsistent findings in previous
research, we formulated the research hypotheses as follows.
H8: Performance expectancy positively and significantly predicts
behavioral intention in ChatGPT-assisted language learning in
higher education.
H9: Perceived satisfaction positively and significantly predicts
behavioral intention in ChatGPT-assisted language learning in
higher education.
2.9. Learning effectiveness
Learning effectiveness was an essential element of learner
experience that merited the attention of education research-
ers. Learning effectiveness in the study is defined as a holistic
construct that includes increasing levels of learning/subject
mastery in ChatGPT-assisted language learning: memoriza-
tion, comprehension, application, analysis, evaluation, and
creation (Chen et al., 2023). Chatbot-led learning was spot-
ted to prompt micro-learning effectiveness (Chen et al.,
2023) and facilitate the acquisition and retention of informa-
tion. Learners’ perceptions of the utility and assistance of
the information system were closely related to the learner
experience, which extended beyond satisfaction and effect-
iveness alone (Tlili et al., 2023). Moreover, Liaw and Huang
(2016) revealed a positive link between students’ behavioral
intention, perceived satisfaction, and the learning effective-
ness using e-books. Based on these findings, we would test
the following hypotheses, and the hypothesized research
model is displayed in Figure 2.
H10: Performance expectancy positively and significantly
predicts learning effectiveness in ChatGPT-assisted language
learning in higher education.
H11: Perceived satisfaction positively and significantly predicts
learning effectiveness in ChatGPT-assisted language learning in
higher education.
H12: Behavioral intention positively and significantly predicts
learning effectiveness in ChatGPT-assisted language learning in
higher education.
3. Methods
This study adopted the mixed method, with structural equa-
tion modeling techniques as the principal approaches sup-
plemented by interviews. The extended three-tier technology
use model integrated constructs across four layers: the personal
elements and information system quality (environmental factors) layer, the affective and cognitive
layer, the behavioral intention layer, and the learning effectiveness layer.
The relationships among variables in the theoretical frame-
work were constructed based on the four layers following
Grant and Osanloo’s guideline (2014), which could be tested
by the quantitative approaches in this study. Moreover, a
supplementary interview question at the end of the quantita-
tive instruments was designed to further develop the
extended three-tier technology use model and understand
ChatGPT-assisted language learning.
3.1. Instrument adaptation and design
We adapted the existing questionnaires and designed the
entire measurement. The self-regulation scale (four items),
information system quality scale (five items), perceived satis-
faction scale (four items), behavioral intention scale (four
items), and learning effectiveness scale (five items) were
adapted from previous scales (Liaw, 2008; Liaw & Huang,
2016). The performance expectancy scale (four items) and
hedonic motivation scale (four items) were revised from
Tseng et al.’s measurements (2019). The entire measurement
Figure 2. The hypothesized model of learner attitudes towards ChatGPT-assisted language learning.
(see Appendix) used a five-point Likert scale and included 37
items. The first part included six questions concerning the
informed consent, participants’ sex, age, education levels,
major, and frequency of using ChatGPT. The second part
represented seven subscales of the seven variables. The third
part embodied an open-ended interview question to gain
insights into the strengths, weaknesses, and recommenda-
tions for ChatGPT-assisted language learning.
We modified each subscale’s context and content accord-
ing to Stewart et al.’s (2012) framework for understanding
modifications to measures for diverse populations so that
the questionnaire could be context-specific to ChatGPT-
assisted learning. One item of self-regulation was dropped
since it was inappropriate for the learning process in
ChatGPT. Items of information system quality, perceived
satisfaction, performance expectancy, and behavioral inten-
tion were adapted through selection, modification, or
replacement to better suit the learning interface and charac-
teristics in ChatGPT. One item was added to the hedonic
motivation scale since the interesting experience was univer-
sal and contributed a great deal of hedonic motivation.
3.2. Research procedures
We revised existing measures and formulated the entire
scale following strict patterns. We used the website of
Questionnaire Star (http://www.wjx.cn/) to pose the ques-
tionnaire and transform it into a quick response code (QR
code) and hyperlink. We adopted the rapid, convenient sam-
pling method, a commonly used data collection method for
structural equation modeling (e.g., Lin & Yu, 2023b). Before
data collection, we calculated the minimum sample size
based on the N:q rule of Kline (2023) for structural equation
modeling, i.e., the recommended sample-size-to-parameters
ratio of 20:1. Therefore, the minimum sample size of this
research was 140 for seven proposed variables. After that,
we distributed the questionnaire to the participants via social
media, including WeChat and Tencent QQ, which mirrored
the existing research (Lin & Yu, 2023b). Participants
received instructions about ChatGPT usage and functions in
advance and were advised to rate each item based on
instinct. We collected the data adhering to the ethical guide-
lines authorized by our university’s Academic Committee.
Prior to answering the questionnaires, we obtained informed
consent from all participants. We assured and declared that
participants have the right of withdrawal, confidentiality,
anonymity, and protection of their data.
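The minimum-sample-size calculation above (20 cases per construct, for seven constructs) can be sketched in a few lines. This is our own illustrative function, applying the 20:1 ratio exactly as the authors did:

```python
def min_sample_size(n_constructs: int, ratio: int = 20) -> int:
    """Minimum N under the N:q rule as applied in this study:
    `ratio` cases per proposed construct."""
    return ratio * n_constructs

# Seven proposed variables at a 20:1 ratio -> a floor of 140 participants.
print(min_sample_size(7))  # 140
```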
We downloaded the datasheet from the website of
Questionnaire Star for further analysis. We saved the data,
including the demographic information, subscales, and an
open-ended question, in an XLS file for normality tests in
IBM SPSS 23.0. We also saved the data in the CSV format,
including only student IDs and questionnaire items, for
Partial Least Squares Structural Equation Modeling (PLS-
SEM) analysis. We assessed the reliability and validity of the
entire scale and tested the hypotheses in SmartPLS 3.2.9
immediately after the normality test. Finally, we conducted a
word frequency analysis of the supplementary interview
question. Answers with little information, including “no”
and blank records, were removed. After that, we adopted
the word frequency analysis method to refine students’ per-
ception of the strengths, weaknesses, and recommendations
of ChatGPT-assisted language learning.
The word list, keywords in the context (KWIC), and
word cloud tools embedded in the AntConc 4.1.0 were used
to calculate the word frequency and yield the word cloud,
which simulated previous research (e.g., Lin & Yu, 2023b).
Word frequency analysis was selected because it is a statis-
tical analysis technique that helps identify the most fre-
quently recurring words and illuminate the core contents
and themes of a text corpus. To further develop the
extended three-tier technology use model and interpret the
disadvantages, advantages, and suggestions of ChatGPT-
assisted language learning, it is also necessary to yield the
frequently mentioned terms to understand the core topics in
a large amount of text.
3.3. Statistical approaches
First, we tested the normality of distribution by calculating
the skewness and kurtosis for every item. Kline (2015)
revealed that the normality test was the basis of further stat-
istical analysis, such as structural equation modeling. The
dataset is regarded as highly skewed when skewness’s abso-
lute value exceeds 3.00 or kurtosis’s absolute value surpasses
10.00; otherwise, the dataset is considered normally distrib-
uted within an acceptable range (Kline, 2015). We used the
“Analyze-Descriptive Statistics-Explore” function in SPSS
23.0 to calculate the distribution of all scale items. After the
assumption test of normal distribution, we evaluated the
outer (measurement) and inner (structural) models. Hair
and Alamer (2022) comprehensively explained the research
process involving PLS-SEM in the context of second lan-
guage studies. The analysis procedures involved several tests,
including the Partial Least Squares Algorithm,
Bootstrapping, and the PLS-predict algorithm tests. These
tests were conducted to assess the reliability and validity of
the model, test the research hypotheses with effect sizes, test
the mediating effects of mediators, and evaluate the predict-
ive capability (Q²) and explanatory power (R²) of the
research model (out-of-sample prediction performance).
We then adopted the word list, keywords in the context
(KWIC), and word cloud tools embedded in the AntConc
4.1.0 to analyze students’ opinions on the advantages, disad-
vantages, and recommendations of ChatGPT-assisted lan-
guage learning. Before the word frequency analysis, we
translated Chinese into English and pre-processed the text
by deleting words without contributions to the semantics,
such as “the, it, and, of, be, can, to, is.” Then, we yielded
the word list and a word cloud map to help initially locate
the frequently emerged terms. After that, we executed the
keywords in the context tool to search the words with the
same meanings by typing the main form of an utterance
plus a “�” to spot the frequencies of all terms and different
word classes relevant to the main topic, such as nouns,
verbs, and adjectives.
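A rough Python analogue of this AntConc workflow (word list plus wildcard search) might look as follows. The stop-word subset and sample responses below are illustrative only, not the study's corpus:

```python
import re
from collections import Counter

# A few of the function words the authors report deleting before counting.
STOPWORDS = {"the", "it", "and", "of", "be", "can", "to", "is", "a", "in"}

def word_frequencies(responses):
    """Tokenize lowercased responses, drop stop words, and count terms."""
    counts = Counter()
    for text in responses:
        counts.update(w for w in re.findall(r"[a-z']+", text.lower())
                      if w not in STOPWORDS)
    return counts

def wildcard_count(responses, stem):
    """Mimic a 'stem*' KWIC search: total frequency of tokens starting with `stem`."""
    return sum(n for w, n in word_frequencies(responses).items()
               if w.startswith(stem))

sample = ["ChatGPT is convenient and fast",
          "the answers can be convenient but sometimes wrong"]
print(word_frequencies(sample).most_common(2))
print(wildcard_count(sample, "convenien"))  # 2
```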
4. Results
4.1. Descriptive statistic results and normality tests
From March 19, 2023 to May 29, 2023, we obtained
responses from 509 participants. The sample size of 509
exceeded the minimum sample size of 140, and a sample of
around 500 was regarded as very good (Lin & Yu, 2023b).
Fifty-one responses were excluded
due to a “No” answer to the informed consent question, a “Never” answer to
the frequency-of-ChatGPT-use question, or careless responding
to the survey items, leaving 458 participants who consented
to join the investigation and had used ChatGPT.
plays the informed consent and participants’ demographic
characteristics. Given our interpersonal relationships and
research fields, we found that most of the participants were
from social sciences and engineering. Only a small percent-
age of participants belonged to science, agriculture, and
medicine, though we collected data randomly across China.
Participants were predominantly female, which reflected a
common atmosphere among social science subjects across
China. Moreover, fewer master’s students, Ph.D. students,
and others (e.g., teachers and workers) participated in the
investigation than undergraduates. The results might be
attributed to the limited size of the participants during ran-
dom sampling. Furthermore, participants aged 20–22
responded to the survey more proactively than the other
age groups. It might be because students aged 20–22 were
more interested in evaluating their use of ChatGPT for lan-
guage learning, and they might have easier access to
ChatGPT.
Further, we implemented the normality tests for the 30
items measuring seven variables in SPSS 23.0. As a result,
the values of all items’ skewness fell within the scope of
−1.45 to −0.81, and the values of all items’ kurtosis ranged
from 1.37 to 5.86. Namely, the skewness’s absolute value of
each item was lower than 3.00, and the kurtosis’s absolute
value of every item was under 10.00. It meant that the skew-
ness and kurtosis of the whole data pool were acceptable.
The 30 items of the seven variables were normally distrib-
uted, so we could further analyze and assess the structural
equation model.
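The acceptance rule applied above (Kline's |skewness| < 3.00 and |kurtosis| < 10.00) is straightforward to script. The data below are simulated five-point responses, not the study's dataset, and SciPy's default excess kurtosis is assumed to match what SPSS reports:

```python
import numpy as np
from scipy.stats import skew, kurtosis

def acceptably_normal(x, skew_cut=3.0, kurt_cut=10.0):
    """Kline's (2015) rule of thumb for SEM: |skewness| < 3 and |kurtosis| < 10."""
    return abs(skew(x)) < skew_cut and abs(kurtosis(x)) < kurt_cut

rng = np.random.default_rng(0)
likert = rng.integers(1, 6, size=458)  # simulated 5-point Likert responses
print(acceptably_normal(likert))
```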
4.2. Structural equation model assessment
4.2.1. The outer (measurement) model
The evaluation of the outer model involved four criteria: (1)
loadings >0.7, (2) internal consistency reliability
(Cronbach’s alpha, Composite reliability >0.7), (3) conver-
gent validity (Average Variance Extracted (AVE) >0.5),
and (4) discriminant validity (Hair & Alamer, 2022). The
results indicated that all these criteria were met. Table 2 dis-
played that the loadings were all above 0.70, which were
statistically significant at the significance level of 0.01
(Henseler et al., 2015). Therefore, the first criterion was
met. The values of composite reliability and
Cronbach’s alpha for all variables fell within the range of
0.70 to 0.95 (see Table 2), indicating satisfactory internal
Table 1. The demographic characteristics of participants.
Item Genre Frequency Proportion (%)
Informed consent, ChatGPT use, and careful attitudes Yes 458 89.98
No 51 10.02
Valid responses (N = 458)
Sex Female 311 67.90
Male 147 32.10
Age Under 19 years old 108 23.58
20–22 years old 213 46.51
23–25 years old 77 16.81
Over 26 years old 60 13.10
Educational levels Undergraduates 327 71.40
Master’s students 88 19.21
Ph.D. students 30 6.55
Others 13 2.84
Major Science 29 6.33
Engineering 100 21.83
Agriculture 9 1.97
Medicine 7 1.53
Arts and humanities 313 68.34
Table 2. The assessment of the outer model.
Latent variables Cronbach’s α CR AVE Items Loadings
Behavioral intention 0.91 0.94 0.79 BI1 0.89**
BI2 0.89**
BI3 0.87**
BI4 0.89**
Hedonic motivation 0.92 0.94 0.81 HM1 0.89**
HM2 0.92**
HM3 0.92**
HM4 0.86**
Learning effectiveness 0.92 0.94 0.76 LE1 0.88**
LE2 0.87**
LE3 0.87**
LE4 0.90**
LE5 0.85**
Perceived satisfaction 0.89 0.92 0.75 PS1 0.84**
PS2 0.91**
PS3 0.86**
PS4 0.86**
Performance expectancy 0.91 0.94 0.79 PE1 0.90**
PE2 0.90**
PE3 0.87**
PE4 0.89**
Self-regulation 0.88 0.92 0.73 SR1 0.87**
SR2 0.87**
SR3 0.84**
SR4 0.83**
Information system quality 0.90 0.92 0.71 ISQ1 0.77**
ISQ2 0.87**
ISQ3 0.85**
ISQ4 0.85**
ISQ5 0.87**
Note. **p < 0.01. CR: composite reliability; AVE: average variance extracted.
6 Q. CAI ET AL.
consistency reliability. Moreover, the values of average variance extracted (AVE) for all constructs exceeded the conservative threshold of 0.50 (see Table 2), demonstrating the convergent validity of the outer model.
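As a minimal sketch of how the reliability and convergent-validity figures in Table 2 follow from standardized loadings: composite reliability is (Σλ)² / ((Σλ)² + Σ(1 − λ²)) and AVE is the mean squared loading. The function names are ours; the example loadings are the behavioral-intention values reported in Table 2.

```python
import numpy as np

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum)^2 + sum of error variances)."""
    lam = np.asarray(loadings, dtype=float)
    num = lam.sum() ** 2
    return num / (num + np.sum(1.0 - lam ** 2))

def average_variance_extracted(loadings):
    """AVE = mean of the squared standardized loadings."""
    lam = np.asarray(loadings, dtype=float)
    return np.mean(lam ** 2)

# Behavioral-intention loadings from Table 2
bi_loadings = [0.89, 0.89, 0.87, 0.89]
cr = composite_reliability(bi_loadings)    # ~0.94, as in Table 2
ave = average_variance_extracted(bi_loadings)  # ~0.79, as in Table 2
assert cr > 0.70 and ave > 0.50  # thresholds from Hair & Alamer (2022)
```

Small discrepancies with published values stem from rounding of the reported loadings.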
Furthermore, we evaluated discriminant validity using the Fornell-Larcker criterion and the heterotrait-monotrait ratio of correlations (HTMT). Table 3 shows that the square root of the average variance extracted for each variable was larger than its correlations with every other variable (Deng & Yu, 2023; Fornell & Larcker, 1981), so the discriminant validity of this study was satisfactory under the Fornell-Larcker criterion. To further verify this result, we computed the HTMT, which relates the average correlation among items measuring different constructs to the average correlations among items within the same construct (Henseler et al., 2015). Table 4 shows that most HTMT values were under 0.85 (Kline, 2015), with six values (the correlations of BI-LE, HM-ISQ, HM-PE, ISQ-PE, PE-LE, PE-PS) under 0.90 (Hair et al., 2019), which confirmed the model's discriminant validity.
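The two discriminant-validity checks can be expressed compactly. The helper names and the synthetic item-correlation matrix below are illustrative; the Fornell-Larcker example reuses the BI/HM figures from Tables 2 and 3.

```python
import numpy as np

def fornell_larcker_ok(ave_a, ave_b, corr_ab):
    """Each construct's sqrt(AVE) must exceed its correlation with the other."""
    return np.sqrt(ave_a) > abs(corr_ab) and np.sqrt(ave_b) > abs(corr_ab)

def htmt(item_corr, idx_a, idx_b):
    """Heterotrait-monotrait ratio: mean between-construct item correlation
    over the geometric mean of the within-construct item correlations."""
    R = np.asarray(item_corr, dtype=float)
    hetero = np.mean([R[i, j] for i in idx_a for j in idx_b])
    mono_a = np.mean([R[i, j] for i in idx_a for j in idx_a if i < j])
    mono_b = np.mean([R[i, j] for i in idx_b for j in idx_b if i < j])
    return hetero / np.sqrt(mono_a * mono_b)

# BI (AVE 0.79) vs HM (AVE 0.81), inter-construct correlation 0.67
assert fornell_larcker_ok(0.79, 0.81, 0.67)

# Synthetic 4-item correlation matrix: within-construct r = 0.8, between = 0.6
R = np.full((4, 4), 0.6)
R[:2, :2] = 0.8
R[2:, 2:] = 0.8
np.fill_diagonal(R, 1.0)
assert abs(htmt(R, [0, 1], [2, 3]) - 0.75) < 1e-9  # 0.6 / 0.8
```

HTMT values below 0.85 (strict) or 0.90 (liberal) support discriminant validity, matching the thresholds cited above.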
4.2.2. The inner (structural) model
After all the criteria of the outer model were met, we assessed the inner model and examined the relationships among the seven variables. The evaluation of the inner model involved several standards: a collinearity test using the variance inflation factor (VIF), the effect sizes and significance levels of the paths, the coefficient of determination (R²), and the out-of-sample predictive power. These standards were assessed using the PLS-SEM method, as Hair and Alamer (2022) described. Table 5 presents the VIF values, all of which fell below the threshold of 5 (Hair et al., 2014). This demonstrated that collinearity among the predictor variables in the model was not a concern; the collinearity standard was therefore met.
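A VIF of this kind is computed by regressing each predictor on the remaining ones: VIF_j = 1 / (1 − R²_j). The code below is an illustrative sketch with synthetic data, not the SmartPLS routine.

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of predictor matrix X."""
    X = np.asarray(X, dtype=float)
    n, k = X.shape
    out = []
    for j in range(k):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(n), others])  # intercept + other predictors
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        r2 = 1.0 - resid.var() / y.var()
        out.append(1.0 / (1.0 - r2))
    return out

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
X[:, 2] = 0.5 * X[:, 0] + rng.normal(scale=0.9, size=200)  # mild collinearity
assert all(v < 5 for v in vif(X))  # below the Hair et al. (2014) cut-off

X_bad = X.copy()
X_bad[:, 1] = X_bad[:, 0] + rng.normal(scale=0.05, size=200)  # near-duplicate
assert any(v > 5 for v in vif(X_bad))  # severe collinearity is flagged
```

Values above 5 signal that a predictor is largely redundant with the others and would inflate the variance of its path estimate.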
After assessing collinearity, we analyzed the model's explanatory and predictive powers for the outcome variables and the effect sizes and significance levels of the 12 hypotheses. We assessed explanatory power by R² using the PLS Algorithm and predictive power by out-of-sample prediction performance using the "PLSpredict" procedure. As Table 5 shows, the R² values for behavioral intention, learning effectiveness, perceived satisfaction, and performance expectancy were 0.58, 0.75, 0.72, and 0.72, respectively, revealing the strong explanatory power of this model (Hair & Alamer, 2022). Regarding out-of-sample prediction performance, as Table 5 displays, the PLS-SEM prediction results were lower than or equal to those of the naïve LM benchmark for the majority of the items of behavioral intention, learning effectiveness, perceived satisfaction, and performance expectancy, except for "LE2" and "PE2." The model therefore had medium predictive power (Hair & Alamer, 2022).
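The out-of-sample comparison follows the PLSpredict logic: an indicator has predictive relevance when its Q² is positive, and the PLS predictions are then benchmarked against a naïve linear model (LM). A hedged sketch of the Q² statistic itself, using toy holdout numbers rather than the study's data:

```python
import numpy as np

def q2_predict(y_true, y_pred, y_train_mean):
    """Out-of-sample Q²: 1 - SSE(model) / SSE(training-mean benchmark).
    Positive values indicate predictive relevance for the indicator."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    sse = np.sum((y_true - y_pred) ** 2)
    sse_naive = np.sum((y_true - y_train_mean) ** 2)
    return 1.0 - sse / sse_naive

# Toy holdout: four observations, near-perfect predictions
q2 = q2_predict([1, 2, 3, 4], [1.1, 1.9, 3.2, 3.8], y_train_mean=2.5)
assert abs(q2 - 0.98) < 1e-9
```

In a full PLSpredict run this statistic is computed per indicator under k-fold cross-validation, once for the PLS model and once for the LM benchmark, and the two sets of results are compared as in Table 5.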
Moreover, using the bootstrapping procedure in SmartPLS 3.2.9, we calculated the effect sizes and significance levels of the 12 hypotheses. As demonstrated in Table 6, information system quality and hedonic motivation explained and predicted performance expectancy and perceived satisfaction more strongly than self-regulation did. Similarly, performance expectancy had a stronger predictive power than perceived satisfaction on behavioral intention, and behavioral intention contributed to learning effectiveness more than perceived satisfaction and performance expectancy did. Table 7 demonstrates the mediating effects of performance expectancy, perceived satisfaction, and behavioral
Table 3. The Fornell-Larcker test results.
BI HM LE PS PE SR ISQ
BI 0.89
HM 0.67 0.90
LE 0.82 0.71 0.87
PS 0.71 0.77 0.72 0.87
PE 0.74 0.80 0.79 0.81 0.89
SR 0.62 0.66 0.62 0.63 0.64 0.85
ISQ 0.65 0.78 0.70 0.75 0.79 0.63 0.84
Note. Square roots of AVEs are in bold on the diagonal line; BI: behavioral
intention; HM: hedonic motivation; LE: learning effectiveness; PS: perceived
satisfaction; PE: performance expectancy; SR: self-regulation; ISQ: information
system quality.
Table 4. The Heterotrait-Monotrait Ratio of correlations (HTMT).
BI HM LE PS PE SR ISQ
BI
HM 0.73
LE 0.90 0.77
PS 0.78 0.85 0.79
PE 0.81 0.88 0.86 0.90
SR 0.69 0.73 0.69 0.71 0.71
ISQ 0.71 0.85 0.77 0.84 0.86 0.71
Table 5. The assessment of the inner model.
Latent variables Items VIF R² MV prediction summary: PLS Q², LM Q²
Behavioral intention BI1 2.90 0.58 0.59 0.42 0.60 0.40
BI2 2.93 0.58 0.40 0.59 0.39
BI3 2.42 0.66 0.35 0.66 0.34
BI4 2.85 0.61 0.41 0.61 0.43
Hedonic motivation HM1 2.86 NA NA NA NA NA
HM2 3.85 NA NA NA NA
HM3 3.69 NA NA NA NA
HM4 2.41 NA NA NA NA
Learning effectiveness LE1 2.84 0.75 0.56 0.46 0.56 0.46
LE2 2.97 0.64 0.38 0.63 0.40
LE3 2.93 0.61 0.43 0.62 0.41
LE4 3.52 0.56 0.48 0.56 0.47
LE5 2.58 0.55 0.43 0.55 0.42
Perceived satisfaction PS1 1.98 0.72 0.49 0.55 0.49 0.55
PS2 3.10 0.57 0.49 0.57 0.50
PS3 2.55 0.59 0.43 0.59 0.43
PS4 2.39 0.55 0.48 0.55 0.47
Performance expectancy PE1 2.91 0.72 0.44 0.54 0.44 0.55
PE2 3.07 0.48 0.56 0.47 0.57
PE3 2.42 0.49 0.55 0.51 0.52
PE4 2.73 0.44 0.59 0.45 0.57
Self-regulation SR1 2.48 NA NA NA NA NA
SR2 2.41 NA NA NA NA
SR3 2.15 NA NA NA NA
SR4 1.94 NA NA NA NA
Information system quality ISQ1 1.91 NA NA NA NA NA
ISQ2 2.68 NA NA NA NA
ISQ3 2.40 NA NA NA NA
ISQ4 2.49 NA NA NA NA
ISQ5 2.58 NA NA NA NA
Note. NA: Not applicable.
INTERNATIONAL JOURNAL OF HUMAN–COMPUTER INTERACTION 7
intention. The path from perceived satisfaction to learning effectiveness was insignificant; consequently, behavioral intention fully mediated the effect of perceived satisfaction on learning effectiveness. Figure 3 displays the finally established model.
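The mediation tests in Table 7 rest on bootstrapped indirect effects: the product of the a path (predictor to mediator) and the b path (mediator to outcome), with a percentile confidence interval. Below is a simplified single-mediator sketch on synthetic standardized data, not the SmartPLS implementation.

```python
import numpy as np

def bootstrap_indirect(x, m, y, n_boot=1000, seed=0):
    """Percentile bootstrap 95% CI for the indirect effect a*b
    in a simple X -> M -> Y model estimated by OLS."""
    rng = np.random.default_rng(seed)
    n = len(x)
    estimates = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)               # resample with replacement
        xs, ms, ys = x[idx], m[idx], y[idx]
        a = np.polyfit(xs, ms, 1)[0]              # X -> M path
        A = np.column_stack([np.ones(n), xs, ms]) # M -> Y controlling for X
        b = np.linalg.lstsq(A, ys, rcond=None)[0][2]
        estimates.append(a * b)
    lo, hi = np.percentile(estimates, [2.5, 97.5])
    return lo, hi

# Synthetic data with a true indirect effect of 0.6 * 0.5 = 0.3
rng = np.random.default_rng(42)
x = rng.normal(size=300)
m = 0.6 * x + rng.normal(scale=0.8, size=300)
y = 0.5 * m + rng.normal(scale=0.8, size=300)
lo, hi = bootstrap_indirect(x, m, y)
assert lo > 0  # the CI excludes zero, so the mediation is significant
```

A confidence interval excluding zero is the criterion applied in Table 7; full mediation additionally requires the direct path to be insignificant, as with PS → LE here.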
4.3. Qualitative findings
We adopted the word list, keyword-in-context (KWIC), and word cloud tools embedded in AntConc 4.1.0 to capture students' perceptions of the strengths and weaknesses of, and their recommendations for, ChatGPT-assisted language learning. Table 8 depicts the four most frequent items in each category. The four most frequent strengths of ChatGPT for language learning were (1) convenience, (2) very good language learning experiences, (3) efficiency, and (4) diverse resources. In contrast, the four most frequent weaknesses were (1) disappointing accuracy and credibility, (2) difficult access and use, (3) students' overdependence, and (4) less intelligence. Participants also mentioned ethical problems such as privacy. The suggestions comprised (1) making ChatGPT more humane, empathetic, or smarter, (2) adding video, audio, picture, or dialogue, (3) updating the database or model, and (4) popularizing its use. Critical or logical thinking, personalized service, and optimized functions were also suggested by ChatGPT-assisted language learners.
language learners. Figure 4 demonstrates the word cloud of
comments on the strengths, weaknesses, and recommenda-
tions of ChatGPT-assisted language learning.
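The AntConc workflow (a frequency word list plus a KWIC concordance) can be approximated in a few lines. The toy comments and function name below are illustrative, not participant data.

```python
import re
from collections import Counter

def kwic(texts, keyword, window=4):
    """Keyword-in-context lines: `window` tokens either side of each hit."""
    hits = []
    for t in texts:
        tokens = re.findall(r"[A-Za-z']+", t.lower())
        for i, tok in enumerate(tokens):
            if tok == keyword:
                left = " ".join(tokens[max(0, i - window):i])
                right = " ".join(tokens[i + 1:i + 1 + window])
                hits.append(f"{left} [{keyword}] {right}")
    return hits

comments = [
    "ChatGPT is convenient and saves time.",
    "Access is difficult, but it is convenient once logged in.",
]
# Word list: token frequencies across all comments
freq = Counter(w for t in comments for w in re.findall(r"[A-Za-z']+", t.lower()))
assert len(kwic(comments, "convenient")) == 2
assert freq["is"] == 3
```

Ranking `freq` and scanning the concordance lines is how recurring themes such as "convenience" or "difficult access" surface from open-ended responses.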
5. Discussion
The study investigated the predictive effects of self-regula-
tion on performance expectancy and perceived satisfaction
in ChatGPT-assisted language learning. The hypotheses
examined were H1 and H4, which stated that the impact of
self-regulation on these variables would be smaller compared
to information system quality and hedonic motivation.
Indeed, the results showed that self-regulation had limited
predictive power for perceived satisfaction and performance
expectancy in ChatGPT-assisted language learning. The find-
ing that self-regulation did not strongly influence perceived
satisfaction and performance expectancy in ChatGPT-
assisted language learning may be attributed to the highly
self-regulatory nature of the software system. ChatGPT
allows students to easily self-regulate their learning pace and
style, reducing the significance of self-regulation in influenc-
ing their satisfaction and performance expectancy. Previous
studies have recognized that self-regulation plays a crucial
role in learning outcomes across different settings
(Zimmerman, 1990). Nevertheless, in the context of
ChatGPT-assisted language learning, self-regulation did not
have a substantial impact on satisfaction and usefulness,
contradicting previous findings (Liaw & Huang, 2016).
Moreover, the study revealed that perceived satisfaction
did not directly predict the learning effectiveness of
ChatGPT-assisted language learning, aligning with studies
on e-book learning (Liaw & Huang, 2016). This unexpected
result may be attributed to various constraints and chal-
lenges faced by learners, such as difficult access and usage,
which hindered their actual language learning progress with ChatGPT. Importantly, this study was conducted during the earliest period of ChatGPT use in China, which may have contributed to the unexpected results regarding self-regulation and perceived satisfaction: students were still grappling with how to harness ChatGPT effectively to enhance their language learning. Therefore,
future research and development efforts should acquire a
comprehensive understanding of the factors that influence
the success of e-learning systems, including ChatGPT (Al-
Fraihat et al., 2020). Additionally, optimizing the access and
Table 6. Results of research hypotheses.
No. Path Path coefficient Confidence interval (2.5%, 97.5%) T statistic p-Value Hypothesis testing result
H1 SR → PE 0.11 (0.03, 0.22) 2.45 0.014 Supported
H2 HM → PE 0.44 (0.32, 0.55) 7.48 0.000 Supported
H3 ISQ → PE 0.37 (0.26, 0.48) 6.66 0.000 Supported
H4 SR → PS 0.08 (0.01, 0.17) 1.98 0.047 Supported
H5 HM → PS 0.24 (0.09, 0.38) 3.33 0.001 Supported
H6 ISQ → PS 0.18 (0.07, 0.29) 3.19 0.001 Supported
H7 PE → PS 0.43 (0.29, 0.56) 6.00 0.000 Supported
H8 PE → BI 0.49 (0.33, 0.63) 6.36 0.000 Supported
H9 PS → BI 0.31 (0.18, 0.44) 4.69 0.000 Supported
H10 PE → LE 0.35 (0.19, 0.51) 4.18 0.000 Supported
H11 PS → LE 0.07 (−0.05, 0.20) 1.13 0.260 Not supported
H12 BI → LE 0.51 (0.33, 0.67) 5.70 0.000 Supported
Table 7. Mediating effects.
Path Indirect effect Standard deviation Confidence interval (2.5%, 97.5%) T statistic p-Value
SR → PE → PS 0.05 0.02 (0.01, 0.10) 2.27 0.022
HM → PE → PS 0.19 0.04 (0.11, 0.27) 4.67 0.000
ISQ → PE → PS 0.16 0.04 (0.09, 0.23) 4.39 0.000
PE → BI → LE 0.25 0.05 (0.15, 0.35) 4.75 0.000
PS → BI → LE 0.16 0.04 (0.08, 0.24) 3.95 0.000
usage of ChatGPT will be beneficial for learners in enhanc-
ing their language learning experience.
Information system quality and hedonic motivation have
been identified as crucial factors in predicting performance
expectancy in ChatGPT-assisted language learning, as stated
in hypotheses H2 and H3. Previous research conducted by
Kohnke (2023) supports this, indicating that interacting with
ChatGPT through follow-up questions leads to enjoyable
experiences for language learners. Consistent with these
findings, Li et al. (2021) have demonstrated that perceived
enjoyment directly influences learners’ performance expect-
ancy and perceived usefulness in e-learning settings.
Moreover, the quality of the information system itself plays
a significant role in shaping learners’ expectations of desired
outcomes. Al-Fraihat et al. (2020) observe a positive
association between high-quality information systems and
learners’ expectancy of desired outcomes. Additionally,
Salloum et al. (2019) uncover a similar connection between
information quality, perceived enjoyment, and learners’ per-
formance expectancy in e-learning systems.
To enhance learners’ performance expectancy in
ChatGPT-assisted language learning, it is crucial to consider
both the quality of the information system and the motiv-
ation drawn from enjoyable experiences. By ensuring a high
standard of information system quality and promoting
hedonic motivation, language learners can have higher
expectations of their performance outcomes when utilizing
ChatGPT.
Information system quality, hedonic motivation, and per-
formance expectancy have been identified as crucial
Figure 3. The established model of learner attitudes towards ChatGPT-assisted language learning. Note. *Indicates the statistical significance of the path coefficient
at p < 0.05, **p < 0.01, and ***p < 0.001.
Table 8. Top four strengths, weaknesses, and recommendations of language learning in ChatGPT.
Freq. Strengths Freq. Weaknesses Freq. Recommendations
81 Convenience 49 Less accuracy and credibility 10 More humane, empathetic, or smarter
69 Very good language learning experiences 47 Difficult access and use 9 Adding video, audio, picture, or dialogue
55 Efficiency 15 Students’ overdependence 7 Database or model update
53 Diverse resources 12 Less intelligence 5 Popularize use
Figure 4. The word cloud of the comments on the strengths, weaknesses, and recommendations of ChatGPT-assisted language learning.
predictors of perceived satisfaction in ChatGPT-assisted lan-
guage learning, as stated in hypotheses H5, H6, and H7.
Furthermore, performance expectancy has been found to
partially mediate the effect of information system quality
and hedonic motivation on perceived satisfaction in
ChatGPT-assisted language learning. The existing literature
supports these findings, indicating that the information and
system quality of communication systems play a significant
role in students’ perceived satisfaction (Wut & Lee, 2022).
Additionally, students’ perception of enjoyment and fun directly contributes to their overall satisfaction with the learning experience (Muñoz-Carril et al., 2021). Sayaf et al.
(2021) have also emphasized the impact of performance
expectancy on learners’ perceived satisfaction with digital
learning. Moreover, the effect of information system quality
and hedonic motivation on perceived satisfaction could also
be partially mediated by performance expectancy. This aligns
with previous research highlighting the importance of infor-
mation system quality (Al-Fraihat et al., 2020), performance
expectancy (Al-Fraihat et al., 2020), and hedonic motivation
(Muñoz-Carril et al., 2021) in shaping learner satisfaction in
technology-enhanced learning.
Performance expectancy has been found to have a stron-
ger predictive power than perceived satisfaction in terms of
behavioral intention in ChatGPT-assisted language learning,
as stated in hypotheses H8 and H9. The results indicate that
learners’ perceptions of usefulness play a more significant
role than perceived satisfaction in determining their inten-
tion to use ChatGPT for language learning. This finding is
consistent with previous research on e-book usage, where
performance expectancy was identified as a stronger pre-
dictor of behavioral intention than perceived satisfaction
(Liaw & Huang, 2016). Additionally, perceived satisfaction
has been found to partially mediate the effect of perform-
ance expectancy on the continual intention to use ChatGPT
for language learning. This is in line with the findings of Pozón-López et al. (2021), who reveal that perceived satisfaction can mediate the effect of performance expectancy on
MOOC user intention. These findings align with existing
research that emphasizes the significance of performance
expectancy and perceived satisfaction for students’ behav-
ioral intention in online learning environments (Al-Fraihat
et al., 2020; Al-Rahmi et al., 2019).
Behavioral intention was found to have a more significant impact on learning effectiveness in ChatGPT-assisted language learning than perceived satisfaction and performance expectancy, as stated in hypotheses H10, H11,
and H12. This suggests that students’ intention to continue
using ChatGPT for language learning has a more significant
influence on their learning outcomes compared to perceived
satisfaction and performance expectancy. Research on e-
books has also shown that behavioral intention contributes
the most to learning effectiveness, followed by perceived sat-
isfaction and performance expectancy (Liaw & Huang,
2016). Although perceived satisfaction did not directly pre-
dict learning effectiveness in ChatGPT-assisted language
learning, Al-Sharafi et al. (2022) have highlighted that users’
satisfaction with a chatbot could stimulate their intention to
continue using it. Therefore, it is understandable that behav-
ioral intention fully mediated the effect of perceived satisfac-
tion on learning effectiveness in language learning using
ChatGPT. Additionally, our study found that behavioral
intention partially mediated the relationship between per-
formance expectancy and learning effectiveness in ChatGPT-
assisted language learning. These findings are consistent
with existing studies that emphasize the significance of per-
formance expectancy and behavioral intention in predicting
learning outcomes and effectiveness in e-learning (Wut &
Lee, 2022).
The findings of this research provide further confirm-
ation, enrichment, and guidance on the predictive influence
of various elements such as information system quality,
affective and cognitive factors, behavioral intention, and
learning effectiveness on learner attitudes towards ChatGPT-
assisted language learning. This study takes an interdisciplin-
ary approach, drawing on motivation, socio-cognitive theory,
theory of planned behavior, and the technology acceptance
model to explore learners’ decision-making processes regard-
ing their behaviors and attitudes towards ChatGPT-
enhanced language learning. It is worth noting that hedonic
motivation and information system quality deserve specific
attention from future developers, as optimizing learning
content and information services can enhance motivation
and support language learning in ChatGPT. The less signifi-
cant impact of self-regulation on language learners’ attitudes
towards ChatGPT-assisted language learning suggests that
future research should investigate the effect of other individ-
ual factors, such as learner interest, identity, and self-con-
cept. Furthermore, the lack of a direct effect of perceived
satisfaction on learning effectiveness emphasizes the impor-
tance of its related elements, namely performance expect-
ancy and behavioral intention, in determining learners’
effectiveness in ChatGPT-assisted language learning.
6. Conclusion
6.1. Major findings
Concerns have emerged regarding the potential threats and
risks associated with learners’ misusing ChatGPT, thereby
necessitating a further understanding of students’ attitudes
towards ChatGPT-assisted language learning. The current
research examines elements affecting learner attitudes
towards ChatGPT-assisted language learning based on the
extended three-tier technology use model from an interdis-
ciplinary perspective, including the technology acceptance
model, motivation, socio-cognitive theory, and the theory of
planned behavior. Information system quality and hedonic
motivation have greater predictive power over perceived sat-
isfaction and performance expectancy than self-regulation.
Behavioral intention better predicts learning effectiveness
than perceived satisfaction and performance expectancy.
Performance expectancy partially mediates the impact of
information system quality and hedonic motivation on per-
ceived satisfaction. Behavioral intention fully mediates the
relationship between perceived satisfaction and learning
effectiveness and partially mediates that between
performance expectancy and learning effectiveness in
ChatGPT-assisted language learning. The findings verify and
enrich the extended three-tier technology use model by
introducing hedonic motivation and learning effectiveness,
revealing learner attitudes and decision-making mechanisms
towards individual behaviors in the ChatGPT-assisted lan-
guage learning context.
6.2. Limitations
The current study had some limitations. First, because the sample was limited to higher-education students in China (although international students studying there also participated), the results may not generalize to language learners at other educational levels using ChatGPT. Second, this is cross-sectional research, which cannot reveal the dynamic, developing nature of learner attitudes towards ChatGPT-assisted language learning. Third, the established model has only medium predictive power. Fourth, many of the features evaluated and discussed may have become outdated, since the study was conducted using ChatGPT-3 or ChatGPT-3.5. Future research should overcome these limitations.
6.3. Implications for future research and practice
This study adds the learning effectiveness layer to the three-
tier technology use model of Liaw (2008) to guide the model
establishment of the influencing factors of learner attitudes
towards ChatGPT-assisted language learning in higher edu-
cation. The research findings greatly inspire practical appli-
cations relevant to the technology acceptance model,
motivation, theory of planned behavior, and social cognitive
theory, which underpins the interdisciplinary perspectives of
the three-tier technology use model (Liaw, 2007). Therefore,
the implications for future practice are multi-faceted and
require specific attention. Firstly, given the strong predictive power of information system quality and hedonic motivation for perceived satisfaction and performance expectancy in ChatGPT-assisted language learning, developers may prioritize these aspects to enhance language learners' attitudes towards ChatGPT. Learners may plan
their behavior and rationally decide whether to accept
ChatGPT to facilitate language learning according to their
attitudes towards ChatGPT-assisted language learning.
Secondly, as teachers have an essential role in integrating
ChatGPT into language teaching and learning, it is vital that
teachers incorporate it in basic conversational learning, guiding
students to follow appropriate learning trajectories by techni-
ques such as keyword design, questioning methods, and com-
bined use with other software. This is in accordance with the
social cognitive theory emphasizing the role of social inter-
action and learning environment in developing cognitive abil-
ity. Thirdly, operators and developers need to optimize the
functions of ChatGPT so that artificial intelligence can better
understand human emotions and conversation styles. For
example, they can tap into a model to recognize human emo-
tions and optimize its understanding accuracy based on labeled
datasets training, context information, knowledge transfer
training, and users’ feedback. Operators and developers should
also address concerns, such as rational output, regarding the
ethical implications of emotions in artificial intelligence.
Finally, researchers and developers should prioritize ethical principles, maximizing the benefits and minimizing the potential harms of ChatGPT-assisted language learning through effective supervision and restriction mechanisms. Test cases for ethics and morality testing can be designed according to the ethical framework formulated by Kamila and Jasrotia (2023), covering data privacy and security, bias and fairness, transparency and explainability, human-AI interaction, and trust
and reliability. Data privacy and security involve the protection of personal information in ChatGPT. Bias and fairness concern prejudice and discrimination, requiring ChatGPT's feedback on daily life and sensitive industries to maintain equality between men and women in employment and equality across races and colors. Transparency
and explainability require more visible, understandable, and
explainable working principles, data sources, decision-mak-
ing basis, and potential impact of ChatGPT to enhance peo-
ple’s judgment. Human-AI interaction necessitates a
supporting environment in which ChatGPT complements
instead of replacing human talents. Trust and reliability are
to preserve precision and objectivity in the feedback content
to ensure the degree of trust and confidence of users or
other stakeholders in ChatGPT.
The implications for future research directions in the
field of ChatGPT-assisted language learning are diverse and
require specific attention. Firstly, future researchers should
extend their studies to primary, junior high, and senior high
students, adult learners, and learners with disabilities in their
investigations. Secondly, researchers can conduct a longitu-
dinal follow-up study to capture the dynamic characteristics
of learner attitudes towards ChatGPT-assisted language
learning over time. By providing sufficient instructions on
ChatGPT use, researchers can ensure that more students
benefit from this technology in language education. Thirdly,
experimental methods are recommended for future research-
ers to verify learner attitudes towards ChatGPT-assisted lan-
guage learning. Comparative studies on the effectiveness of
different teaching modes, such as autonomous learning,
ChatGPT-led teaching, and in-person teaching, are also war-
ranted. Fourthly, it is suggested that researchers explore the
attitudes towards ChatGPT as a language teaching tool from
teachers’ perspectives. Fifthly, incorporating other individual
and contextual variables, such as self-efficacy and teacher
support, into the models would enrich the explanatory and
predictive powers of technology acceptance and usage in
educational practice, including the use of ChatGPT (Chiu
et al., 2023).
Authors’ contributions
Qianqian Cai: Methodology, Data curation, Formal analysis, Resources,
Investigation, Software, Validation, Roles/Writing – original draft,
Writing – review & editing; Yupeng Lin: Data curation, Writing –
review & editing; Zhonggen Yu: Conceptualization, Supervision, and
Funding acquisition.
Funding
This work is supported by the [Key Research and Application Project of the Key Laboratory of Key Technologies for Localization Language Services of the State Administration of Press and Publication, "Research on Localization and Intelligent Language Education Technology for the 'Belt and Road Initiative'"] under Grant [Number CSLS 20230012]; and [Special fund of Beijing Co-construction Project - Research and reform of the "Undergraduate Teaching Reform and Innovation Project" of Beijing higher education in 2020 - innovative "multilingual+" excellent talent training system] under Grant [Number 202010032003].
Disclosure statement
No potential conflict of interest was reported by the author(s).
ORCID
Qianqian Cai http://orcid.org/0000-0002-5116-8845
Yupeng Lin http://orcid.org/0000-0002-3182-2459
Zhonggen Yu http://orcid.org/0000-0002-3873-980X
Availability of data and material
We make sure that all data and materials support our published claims
and comply with field standards.
Data availability statement
The data that support the findings of this study are openly available
in [OSF] at [https://osf.io/5d9te/?view_only=f73f253d38f643588ea31a
73bdd6376b].
Ethics approval statement
The study was approved by the institutional review board of Beijing
Language and Culture University. All participants provided written informed consent.
References
Al-Emran, M., Arpaci, I., & Salloum, S. A. (2020). An empirical exam-
ination of continuous intention to use m-learning: An integrated
model. Education and Information Technologies, 25(4), 2899–2918.
https://doi.org/10.1007/s10639-019-10094-2
Al-Fraihat, D., Joy, M., Masa’deh, R., & Sinclair, J. (2020). Evaluating E-
learning systems success: An empirical study. Computers in Human
Behavior, 102(1), 67–86. https://doi.org/10.1016/j.chb.2019.08.004
Almahri, F. A. J., Bell, D., & Merhi, M. (2020, March). Understanding
student acceptance and use of chatbots in the United Kingdom uni-
versities: A structural equation modelling approach. In 2020 6th
International Conference on Information Management (ICIM) (pp.
284–288). IEEE. https://doi.org/10.1109/ICIM49319.2020.244712
Al-Rahmi, W. M., Yahaya, N., Aldraiweesh, A. A., Alamri, M. M.,
Aljarboa, N. A., Alturki, U., & Aljeraiwi, A. A. (2019). Integrating tech-
nology acceptance model with innovation diffusion theory: An empir-
ical investigation on students’ intention to use E-learning systems. IEEE
Access, 7, 26797–26809. https://doi.org/10.1109/ACCESS.2019.2899368
Al-Sharafi, M. A., Al-Emran, M., Iranmanesh, M., Al-Qaysi, N., Iahad,
N. A., & Arpaci, I. (2022). Understanding the impact of knowledge
management factors on the sustainable use of AI-based chatbots for
educational purposes using a hybrid SEM-ANN approach.
Interactive Learning Environments, 30, 1–20. https://doi.org/10.1080/
10494820.2022.2075014
Asante, I. O., Jiang, Y., Hossin, A. M., & Luo, X. (2023). Optimization
of consumer engagement with artificial intelligence elements on
electronic commerce platforms. Journal of Electronic Commerce
Research, 24(1), 7–28.
Ashfaq, M., Yun, J., Yu, S., & Loureiro, S. M. C. (2020). I, Chatbot:
Modeling the determinants of users’ satisfaction and continuance
intention of AI-powered service agents. Telematics and Informatics,
54, 101473. https://doi.org/10.1016/j.tele.2020.101473
Balakrishnan, J., Abed, S. S., & Jones, P. (2022). The role of meta-
UTAUT factors, perceived anthropomorphism, perceived intelli-
gence, and social self-efficacy in chatbot-based services?
Technological Forecasting and Social Change, 180, 121692. https://
doi.org/10.1016/j.techfore.2022.121692
Chao, C. M. (2019). Factors determining the behavioral intention to
use mobile learning: An application and extension of the UTAUT
model. Frontiers in Psychology, 10, 1652. https://doi.org/10.3389/
fpsyg.2019.01652
Chen, Y., Jensen, S., Albert, L. J., Gupta, S., & Lee, T. (2023). Artificial
intelligence (AI) student assistants in the classroom: Designing chat-
bots to support student success. Information Systems Frontiers,
25(1), 161–182. https://doi.org/10.1007/s10796-022-10291-4
Chiu, T. K. F., Moorhouse, B. L., Chai, C. S., & Ismailov, M. (2023).
Teacher support and student motivation to learn with Artificial
Intelligence (AI) based chatbot. Interactive Learning Environments.
Advance online publication. https://doi.org/10.1080/10494820.2023.
2172044
DeLone, W. H., & McLean, E. R. (1992). Information systems success:
The quest for the dependent variable. Information Systems Research,
3(1), 60–95. https://doi.org/10.1287/isre.3.1.60
Deng, X., & Yu, Z. (2023). An extended hedonic motivation adoption
model of TikTok in higher education. Education and Information
Technologies. Advance online publication. https://doi.org/10.1007/
s10639-023-11749-x
Ebadi, S., & Amini, A. (2022). Examining the roles of social presence and
human-likeness on Iranian EFL learners’ motivation using artificial
intelligence technology: A case of CSIEC chatbot. Interactive Learning
Environments. Advance online publication. https://doi.org/10.1080/
10494820.2022.2096638
Fornell, C., & Larcker, D. F. (1981). Evaluating structural equation
models with unobservable variables and measurement error. Journal
of Marketing Research, 18(1), 39–50. https://doi.org/10.2307/3151312
Foroughi, B., Senali, M. G., Iranmanesh, M., Khanfar, A., Ghobakhloo,
M., Annamalai, N., & Naghmeh-Abbaspour, B. (2023). Determinants
of intention to use ChatGPT for educational purposes: Findings from
PLS-SEM and fsQCA. International Journal of Human–Computer
Interaction. Advance online publication. https://doi.org/10.1080/10447
318.2023.2226495
Grant, C., & Osanloo, A. (2014). Understanding, selecting, and integrating a theoretical framework in dissertation research: Creating the blueprint for your "house". Administrative Issues Journal: Education, Practice, and Research, 4(2), 12–26. https://doi.org/10.5929/2014.4.2.9
Hair, J., & Alamer, A. (2022). Partial Least Squares Structural Equation Modeling (PLS-SEM) in second language and education research: Guidelines using an applied example. Research Methods in Applied Linguistics, 1(3), 100027. https://doi.org/10.1016/j.rmal.2022.100027
Hair, J. F., Hult, G. T. M., & Ringle, C. M. (2014). Partial least squares structural equation modeling (PLS-SEM). SAGE Publications.
Hair, J. F., Risher, J. J., Sarstedt, M., & Ringle, C. M. (2019). When to use and how to report the results of PLS-SEM. European Business Review, 31(1), 2–24. https://doi.org/10.1108/EBR-11-2018-0203
Harel, I., & Papert, S. (1990). Software design as a learning environment. Interactive Learning Environments, 1(1), 1–32. https://doi.org/10.1080/1049482900010102
Henseler, J., Ringle, C. M., & Sarstedt, M. (2015). A new criterion for
assessing discriminant validity in variance-based structural equation
modeling. Journal of the Academy of Marketing Science, 43(1), 115–
135. https://doi.org/10.1007/s11747-014-0403-8
Hsu, T. C., Huang, H. L., Hwang, G. J., & Chen, M. S. (2023). Effects of incorporating an expert decision-making mechanism into chatbots on students' achievement, enjoyment, and anxiety. Educational Technology & Society, 26(1), 218–231. https://doi.org/10.30191/ETS.202301_26(1).0016
Huang, W., Hew, K. F., & Fryer, L. K. (2022). Chatbots for language
learning – are they really useful? A systematic review of chatbot-
supported language learning. Journal of Computer Assisted Learning,
38(1), 237–257. https://doi.org/10.1111/jcal.12610
Joo, Y. J., Park, S., & Shin, E. K. (2017). Students’ expectation, satisfaction,
and continuance intention to use digital textbooks. Computers in
Human Behavior, 69(2), 83–90. https://doi.org/10.1016/j.chb.2016.12.025
Kamila, M. K., & Jasrotia, S. S. (2023). Ethical issues in the develop-
ment of artificial intelligence: Recognizing the risks. International
Journal of Ethics and Systems. Advance online publication. https://
doi.org/10.1108/IJOES-05-2023-0107
Kim-Soon, N., Ibrahim, M. A., Razzaly, W., Ahmad, A. R., & Sirisa,
N. M. X. (2017). Mobile technology for learning satisfaction among
students at Malaysian technical universities (MTUN). Advanced
Science Letters, 23(1), 223–226. https://doi.org/10.1166/asl.2017.7140
Kline, R. B. (2015). Principles and practice of structural equation modeling (4th ed.). Guilford Press.
Kline, R. B. (2023). Principles and practice of structural equation modeling (5th ed.). Guilford Press.
Kohnke, L. (2023). L2 learners’ perceptions of a chatbot as a potential
independent language learning tool. International Journal of Mobile
Learning and Organisation, 17(1/2), 214. https://doi.org/10.1504/
IJMLO.2023.128339
Kohnke, L., Moorhouse, B. L., & Zou, D. (2023). ChatGPT for lan-
guage teaching and learning. RELC Journal, 54(2), 537–550. https://
doi.org/10.1177/00336882231162868
Lee, J. G., Lee, J., & Lee, D. (2021). Cheerful encouragement or careful
listening: The dynamics of robot etiquette at children’s different
developmental stages. Computers in Human Behavior, 118, 106697.
https://doi.org/10.1016/j.chb.2021.106697
Li, C., He, L., & Wong, I. A. (2021). Determinants predicting under-
graduates’ intention to adopt e-learning for studying English in
Chinese higher education context: A structural equation modelling
approach. Education and Information Technologies, 26(4), 4221–
4239. https://doi.org/10.1007/s10639-021-10462-x
Li, X., Xie, S., Ye, Z., Ma, S., & Yu, G. (2022). Investigating patients’
continuance intention towards conversational agents in outpatient
departments: Cross-sectional field survey. Journal of Medical
Internet Research, 24(11), e40681. https://doi.org/10.2196/40681
Liaw, S. S. (2007). Understanding computers and the Internet as a
work-assisted tool. Computers in Human Behavior, 23(1), 399–414.
https://doi.org/10.1016/j.chb.2004.10.018
Liaw, S. S. (2008). Investigating students’ perceived satisfaction, behav-
ioral intention, and effectiveness of e-learning: A case study of the
Blackboard system. Computers & Education, 51(2), 864–873. https://
doi.org/10.1016/j.compedu.2007.09.005
Liaw, S. S., & Huang, H. M. (2016). Investigating learner attitudes
towards e-books as learning tools: Based on the activity theory
approach. Interactive Learning Environments, 24(3), 625–643.
https://doi.org/10.1080/10494820.2014.915416
Lin, Y. P., & Yu, Z. G. (2023a). A bibliometric analysis of artificial
intelligence chatbots in educational contexts. Interactive Technology
and Smart Education. Advance online publication. https://doi.org/10.
1108/ITSE-12-2022-0165
Lin, Y. P., & Yu, Z. G. (2023b). Extending technology acceptance model to higher-education students' use of digital academic reading tools on computers. International Journal of Educational Technology in Higher Education, 20(1), 34. https://doi.org/10.1186/s41239-023-00403-8
Ma, L., & Lee, C. S. (2019). Investigating the adoption of MOOCs: A
technology user-environment perspective. Journal of Computer
Assisted Learning, 35(1), 89–98. https://doi.org/10.1111/jcal.12314
Malik, R., Shrama, A., Trivedi, S., & Mishra, R. (2021). Adoption of chatbots for learning among university students: Role of perceived convenience and enhanced performance. International Journal of Emerging Technologies in Learning (iJET), 16(18), 200–212. https://doi.org/10.3991/ijet.v16i18.24315
Muñoz-Carril, P. C., Hernández-Sellés, N., Fuentes-Abeledo, E. J., & González-Sanmamed, M. (2021). Factors influencing students' perceived impact of learning and satisfaction in Computer Supported Collaborative Learning. Computers & Education, 174, 104310. https://doi.org/10.1016/j.compedu.2021.104310
Pask, G. (1976). Conversation theory: Applications in education and epistemology. Elsevier.
Pavlik, J. V. (2023). Collaborating with ChatGPT: Considering the
implications of generative artificial intelligence for journalism and
media education. Journalism & Mass Communication Educator,
78(1), 84–93. https://doi.org/10.1177/10776958221149577
Pillai, R., & Sivathanu, B. (2020). Adoption of AI-based chatbots for
hospitality and tourism. International Journal of Contemporary
Hospitality Management, 32(10), 3199–3226. https://doi.org/10.1108/
IJCHM-04-2020-0259
Pozón-López, I., Higueras-Castillo, E., Muñoz-Leiva, F., & Liébana-Cabanillas, F. J. (2021). Perceived user satisfaction and intention to use massive open online courses (MOOCs). Journal of Computing in Higher Education, 33(1), 85–120. https://doi.org/10.1007/s12528-020-09257-9
Rospigliosi, P. A. (2023). Artificial intelligence in teaching and
learning: What questions should we ask of ChatGPT? Interactive
Learning Environments, 31(1), 1–3. https://doi.org/10.1080/
10494820.2023.2180191
Salloum, S. A., Alhamad, A. Q. M., Al-Emran, M., Monem, A. A., &
Shaalan, K. (2019). Exploring students’ acceptance of e-learning
through the development of a comprehensive technology accept-
ance model. IEEE Access, 7, 128445–128462. https://doi.org/10.
1109/ACCESS.2019.2939467
Sayaf, A. M., Alamri, M. M., Alqahtani, M. A., & Al-Rahmi, W. M.
(2021). Information and communications technology used in higher
education: An empirical study on digital learning as sustainability.
Sustainability, 13(13), 7074. https://doi.org/10.3390/su13137074
Sharples, M., Taylor, J., & Vavoula, G. (2005). Towards a theory of mobile learning. In Proceedings of mLearn 2005, 25–28 October, South Africa.
Sparks, R., & Alamer, A. (2022). Long-term impacts of L1 language
skills on L2 anxiety: The mediating role of language aptitude and L2
achievement. Language Teaching Research. Advance online publica-
tion. https://doi.org/10.1177/13621688221104392
Stewart, A., Thrasher, A., Goldberg, J., & Shea, J. (2012). A framework
for understanding modifications to measures for diverse popula-
tions. Journal of Aging and Health, 24(6), 992–1017. https://doi.org/
10.1177/0898264312440321
Svikhnushina, E., & Pu, P. (2022). PEACE: A model of key social and
emotional qualities of conversational chatbots. ACM Transactions on
Interactive Intelligent Systems, 12(4), 1–29. https://doi.org/10.1145/
3531064
Tlili, A., Shehata, B., Adarkwah, M. A., Bozkurt, A., Hickey, D. T.,
Huang, R., & Agyemang, B. (2023). What if the devil is my guardian
angel: ChatGPT as a case study of using chatbots in education.
Smart Learning Environments, 10(1), 1–24. https://doi.org/10.1186/
s40561-023-00237-x
Tseng, T. H., Lin, S., Wang, Y.-S., & Liu, H.-X. (2019). Investigating
teachers’ adoption of MOOCs: The perspective of UTAUT2.
Interactive Learning Environments, 30(4), 635–650. https://doi.org/
10.1080/10494820.2019.1674888
Wu, R., & Yu, Z. (2023). Do AI chatbots improve students' learning outcomes? Evidence from a meta-analysis. British Journal of Educational Technology. Advance online publication. https://doi.org/10.1111/bjet.13334
Wut, T. M., & Lee, S. W. (2022). Factors affecting students’ online
behavioral intention in using discussion forum. Interactive
Technology and Smart Education, 19(3), 300–318. https://doi.org/10.
1108/ITSE-02-2021-0034
Xia, Q., Chiu, T. K. F., Chai, C. S., & Xie, K. (2023). The mediating
effects of needs satisfaction on the relationships between prior
knowledge and self-regulated learning through artificial intelligence
chatbot. British Journal of Educational Technology, 54(4), 967–986.
https://doi.org/10.1111/bjet.13305
Yin, J., Goh, T. T., Yang, B., & Xiaobin, Y. (2021). Conversation tech-
nology with micro-learning: The impact of chatbot-based learning
on students’ learning motivation and performance. Journal of
Educational Computing Research, 59(1), 154–177. https://doi.org/10.
1177/0735633120952067
Zimmerman, B. J. (1990). Self-regulated learning and academic
achievement: An overview. Educational Psychologist, 25(1), 3–17.
https://doi.org/10.1207/s15326985ep2501_2
Zimmerman, B. J., & Schunk, D. H. (2011). Handbook of self-
regulation of learning and performance. Routledge/Taylor & Francis
Group.
About the authors
Qianqian Cai is a doctoral student majoring in applied linguistics of foreign languages at the Faculty of Foreign Studies, Beijing Language and Culture University, China. She has written over 10 first-authored articles on technology-enhanced (language) education, which are under consideration for publication in reputable international journals.
Yupeng Lin is a postgraduate student majoring in linguistic studies and applied linguistics of foreign languages at the Faculty of Foreign Studies, Beijing Language and Culture University, China. He has written over 20 academic articles on technology-enhanced (language) education and published five first-authored papers in reputable international journals.
Zhonggen Yu is a distinguished professor and Ph.D. supervisor at the Faculty of Foreign Studies, Beijing Language and Culture University, China, and a research fellow of several academic institutions. Drawing on rich teaching and research experience, he has published over 180 articles on technology-enhanced (language) education in distinguished journals.
Appendix
ChatGPT 用于语言学习的使用情况调查/An investigation
on the use of ChatGPT for language learning
Part 1: Informed consent and demographic information
1. 您愿意参加这项研究吗?/Would you like to participate in the study?
□是/Yes □否/No
2. 您的性别是:/Your gender is?
□男/Male □女/Female
3. 您的年龄是?/How old are you?
□19岁以下/below 19 years old □20-22岁/20-22 years old
□23-25岁/23-25 years old □26岁以上/26 years old and above
4. 您的年级是:/Your grade is?
□大学生/Undergraduates □硕士生/Master's students
□博士生/Doctoral candidates □其他/Others
5. 您的专业属于:/Your major belongs to:
□理科/Science □工科/Engineering □农学类/Agriculture
□医学类/Medicine □文科/Arts and humanities
6. 您使用ChatGPT的频率是?/How often do you use ChatGPT?
□从不/Never □偶尔/Seldom □有时/Sometimes □经常/Often
□总是/Always
Part 2: Seven subscales
SR (Self-regulation)
□非常不符合/strongly disagree □不符合/disagree □不确定/neutral □符合/agree □非常符合/strongly agree
1、ChatGPT是一款自我调节学习的工具/ChatGPT is a self-regulated learning tool.
2、ChatGPT是一款自主学习的工具/ChatGPT is an active learning tool.
3、ChatGPT是一款便于个性化使用的学习工具/ChatGPT is a convenient learning tool for individual use.
4、我可以自主调节ChatGPT里面的学习内容/Learning contents in ChatGPT are easy to regulate by myself.
ISQ (Information system quality)
□非常不符合/strongly disagree □不符合/disagree □不确定/neutral □符合/agree □非常符合/strongly agree
5、ChatGPT软件系统的响应速度很快/The ChatGPT software system is very responsive.
6、ChatGPT及时提供了我需要的学习内容与资源/ChatGPT provides the learning contents and resources I need in time.
7、ChatGPT提供的信息和教育服务多样化/ChatGPT provides various information and services.
8、ChatGPT的系统功能设置与操作简洁明了/System functions and operations in ChatGPT are simple and clear.
9、ChatGPT提供的信息资源与服务是有启发性和价值的/ChatGPT provides information resources and services that are enlightening and valuable.
HM (Hedonic motivation)
□非常不符合/strongly disagree □不符合/disagree □不确定/neutral □符合/agree □非常符合/strongly agree
10、使用ChatGPT学习语言是有趣的/Using ChatGPT to learn language is fun.
11、使用ChatGPT学习是享受的/Using ChatGPT to learn is enjoyable.
12、使用ChatGPT学习是快乐的/Using ChatGPT to learn is entertaining.
13、使用ChatGPT学习的过程中, 我学了很多有趣的事情/I have learned many interesting things while learning with ChatGPT.
PS (Perceived satisfaction)
□非常不符合/strongly disagree □不符合/disagree □不确定/neutral □符合/agree □非常符合/strongly agree
14、我满意ChatGPT作为我的语言学习辅助工具/I am satisfied with using ChatGPT as a tool to enhance language learning.
15、我满意ChatGPT现有的功能与教育服务/I am satisfied with the functions and education services available in ChatGPT.
16、我满意ChatGPT现有的学习资源/I am satisfied with the learning resources available in ChatGPT.
17、我满意ChatGPT的内容组织与呈现方式/I am satisfied with the organization and presentation format of contents in ChatGPT.
PE (Performance expectancy)
□非常不符合/strongly disagree □不符合/disagree □不确定/neutral □符合/agree □非常符合/strongly agree
18、ChatGPT为我的语言学习提供了有用的学习资源与服务/ChatGPT provides useful learning resources and services for language learning.
19、ChatGPT有助于实现我的学习目标/ChatGPT can promote my learning goals.
20、ChatGPT有助于我快速完成学习任务/ChatGPT helps me accomplish learning tasks more quickly.
21、ChatGPT提供的学习内容与资源有助于增长我的见识/The learning contents and resources in ChatGPT are informative.
BI (Behavioral intention)
□非常不符合/strongly disagree □不符合/disagree □不确定/neutral □符合/agree □非常符合/strongly agree
22、未来我打算使用ChatGPT辅助我的语言学习/I intend to use ChatGPT to enhance my language learning in the future.
23、我打算使用ChatGPT的学习内容辅助我的学习/I intend to use the learning contents in ChatGPT to enhance my learning.
24、我打算使用ChatGPT增强我的学习意愿/I intend to use ChatGPT to enhance my learning intention.
25、我打算将ChatGPT作为一个自主学习的工具/I intend to use ChatGPT as an autonomous learning tool.
LE (Learning effectiveness)
□非常不符合/strongly disagree □不符合/disagree □不确定/neutral □符合/agree □非常符合/strongly agree
26、ChatGPT有助于提高我的语言学习效率/ChatGPT can improve my language learning efficiency.
27、ChatGPT有助于提高学习成绩/ChatGPT can improve my learning performance.
28、ChatGPT有助于增强学习动机/ChatGPT can enhance my learning motivation.
29、ChatGPT有助于提升学习效能感/ChatGPT can enhance my learning efficacy.
30、ChatGPT可以提供更多学习资源/ChatGPT can provide more learning resources.
Part 3: An open-ended interview question
请您谈谈ChatGPT在语言学习方面的优点、缺点以及您的建议。/Please comment on the advantages and disadvantages of using ChatGPT for language learning, and offer your suggestions.