Figure A3. The relationship between chatbot question asking depth and human self-disclosure depth over time. The raw data points are accompanied by a regression line and corresponding 95% confidence bands.

Source publication
Article
This research analyses the use of language-based strategies in human-chatbot interactions, namely the use of self-disclosure, question asking, expressions of similarity, empathy, humour, and the communication competence of the chatbot. This study aims to discover whether humans and a social chatbot communicate differently. Furthermore, we analyzed...

Context in source publication

Context 1
... with the previous relationship, Figure A3 shows that the relationship between chatbot question asking depth and self-disclosure depth is not consistent over time. Somewhat unexpectedly, the model (BIC = 535.19) shows a sizeable relationship between question asking depth and self-disclosure depth (F(1, 50.37) = 11.98, p = 0.001; b = 0.29, SE = 0.058; t(55.521) ...
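The model reported in this excerpt is a longitudinal (mixed-effects) regression of participants' self-disclosure depth on chatbot question-asking depth across interaction sessions. The following is a minimal sketch of that general modelling approach in Python's statsmodels, assuming hypothetical column names (participant_id, session, question_depth, disclosure_depth); it illustrates the technique only and is not the authors' exact specification.

```python
# Minimal sketch of a linear mixed-effects model relating chatbot
# question-asking depth to participant self-disclosure depth over time.
# Column names and file name are hypothetical; the original study's
# exact specification may differ.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per participant x session.
df = pd.read_csv("chat_sessions.csv")

# Random intercept per participant; fixed effects for question depth,
# session (time), and their interaction to test whether the association
# is consistent over time.
model = smf.mixedlm(
    "disclosure_depth ~ question_depth * session",
    data=df,
    groups=df["participant_id"],
)
result = model.fit(reml=False)  # ML fit so information criteria are comparable
print(result.summary())
```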

Similar publications

Article
Developments in artificial intelligence (AI) have led to the emergence of new technologies offering unique business opportunities. This article examines the factors influencing AI-based chatbot implementation by small and medium enterprises (SMEs). We grounded the article's conceptual model in the technology-organization-environment (TOE) framework...

Citations

... The evolution of Large Language Models (LLMs) (Brown et al., 2020;Zhu, 2022) is notably enhancing the capabilities of user-bot interactions (Almansor and Hussain, 2020; Caldarini et al., 2022;Chaurasia et al., 2023;Lee et al., 2022;Croes et al., 2023). These advanced models possess a remarkable talent for producing fluent and coherent dialogues, particularly when prompted or inspired by user inputs. ...
... (Brown et al., 2020;Zhu, 2022;Almansor and Hussain, 2020;Caldarini et al., 2022;Chaurasia et al., 2023;Lee et al., 2022;Croes et al., 2023). Further research needs to explore linguistic features and establish a standardized chatbot evaluation framework (Almansor and Hussain, 2020). ...
... Ethical considerations and sector-specific applications of NLP-powered chatbots are also critical (Chaurasia et al., 2023). Chatbots should enhance social communication skills and develop a 'Theory of Mind' to improve human-chatbot relationships (Croes et al., 2023). Evaluations of LLMs should incorporate interactive data instead of relying solely on standalone metrics (Lee et al., 2022). ...
Preprint
Large Language Models (LLMs) have significantly advanced user-bot interactions, enabling more complex and coherent dialogues. However, the prevalent text-only modality might not fully exploit the potential for effective user engagement. This paper explores the impact of multi-modal interactions, which incorporate images and audio alongside text, on user engagement in chatbot conversations. We conduct a comprehensive analysis using a diverse set of chatbots and real-user interaction data, employing metrics such as retention rate and conversation length to evaluate user engagement. Our findings reveal a significant enhancement in user engagement with multi-modal interactions compared to text-only dialogues. Notably, the incorporation of a third modality significantly amplifies engagement beyond the benefits observed with just two modalities. These results suggest that multi-modal interactions optimize cognitive processing and facilitate richer information comprehension. This study underscores the importance of multi-modality in chatbot design, offering valuable insights for creating more engaging and immersive AI communication experiences and informing the broader AI community about the benefits of multi-modal interactions in enhancing user engagement.
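The preprint above evaluates engagement through retention rate and conversation length across text-only and multi-modal conditions. The sketch below shows how such metrics might be computed from interaction logs and compared between conditions; the file name, column names (user_id, condition, session_start, n_turns), the seven-day retention window, and the Welch t-test are illustrative assumptions, not the authors' actual pipeline.

```python
# Hedged sketch: comparing user engagement between text-only and multi-modal
# chatbot conversations using two of the metrics named in the preprint
# (conversation length and retention rate). All names and thresholds below
# are assumptions for illustration.
import pandas as pd
from scipy import stats

logs = pd.read_csv("chat_logs.csv")  # hypothetical: user_id, condition, session_start, n_turns

# Conversation length: turns per session, compared across conditions.
text_only = logs.loc[logs["condition"] == "text_only", "n_turns"]
multi_modal = logs.loc[logs["condition"] == "multi_modal", "n_turns"]
t_stat, p_value = stats.ttest_ind(multi_modal, text_only, equal_var=False)
print(f"Mean turns: multi-modal={multi_modal.mean():.1f}, "
      f"text-only={text_only.mean():.1f} (p={p_value:.3f})")

# Retention rate: share of users who return within 7 days of their first session.
logs["session_start"] = pd.to_datetime(logs["session_start"])
first = logs.groupby("user_id")["session_start"].min().rename("first_seen")
logs = logs.join(first, on="user_id")
returned = (
    logs[logs["session_start"] > logs["first_seen"]]
    .assign(within_week=lambda d: d["session_start"] <= d["first_seen"] + pd.Timedelta(days=7))
    .groupby("user_id")["within_week"].any()
)
retention = returned.reindex(logs["user_id"].unique(), fill_value=False).mean()
print(f"7-day retention rate: {retention:.1%}")
```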
... It is assumed that an AI-based chatbot can significantly enhance the customer's trust through the numerous social cues that were established and described in the context of the Social Response Theory (Diederich, Brendel, and Kolbe 2020), and through the numerous social signals/stimuli that were defined and described in the Trust Signal Theory. Moreover, social signals in AI-based chatbots (e.g., in customer service) can help convey an interpersonal relationship of trust (Croes et al. 2022), consistent with Palomino-Navarro and Arbaiza (2022). Consequently, the extent of the overall usefulness of the AI-based chatbot in terms of its character as a human is measured by the perception that the design is considered more useful (Diederich, Brendel, and Kolbe 2020) for trustworthy customer interaction. How helpful an AI-based chatbot seems to be in natural language interaction depends on how well it understands a customer's questions and how meaningful its answers are (Diederich, Brendel, and Kolbe 2020). ...
... Therefore, in line with previous studies (e.g., Zavolokina et al. [2020]), we propose that incorporating and providing visible trust-supporting design elements as (transparent and factual security) signals (stimuli) (e.g., trust seals) can help to promote interaction with an AI-based chatbot. Consistent with previous studies (e.g., Croes et al. [2022]; Palomino-Navarro and Arbaiza [2022]), our results show that MARA users perceived a higher level of social presence compared to the competing AI, which confirms hypothesis H2. Comparable to previous studies (e.g., Toader et al. [2019]), we can thus confirm that non-verbal trust-supporting design elements as (social) signals (stimuli) (e.g., name, gender, photo of a woman, emojis) can have a significant impact on customer trust regarding interaction with an AI-based chatbot. ...
... Users are expected to build emotional connections and intimate relationships with social chatbots through simulated affective communication (Shum et al., 2018). Croes et al. (2023) discovered that an increase in asking chatbots questions led to a corresponding rise in participants reciprocating these queries, subsequently leading to increased self-disclosure. Furthermore, the co-activity of a chatbot and the visualization of a conversational atmosphere have also been identified as advantageous factors in fostering a user's self-disclosure and the development of a trusting relationship with the chatbot (Pujiarti et al., 2022). ...
... More specifically, the intricate challenges and potential negative outcomes encountered by real users as they establish intimate bonds with social chatbots are insufficiently explored. Only a limited body of literature addresses the potential obstacles (e.g., Croes et al., 2023;Skjuve et al., 2022) and the adverse aspects of intimate human-chatbot relationships (e.g., Depounti et al., 2022;Laestadius et al., 2022). Yet, these studies have primarily focused on specific aspects such as lack of self-disclosure reciprocity (Croes et al., 2023), emotional dependence (Laestadius et al., 2022;Xie et al., 2023) and gender idealization (Depounti et al., 2022). In contrast, our study aims to provide a more comprehensive understanding of users' concerns regarding social chatbots by drawing upon the broader concept of uncertainty from interpersonal communication. ...
Article
Present-day power users of AI-powered social chatbots encounter various uncertainties and concerns when forming relationships with these virtual agents. To provide a systematic analysis of users' concerns and to complement the current West-dominated approach to chatbot studies, we conducted a thorough observation of the experienced uncertainties users reported in a Chinese online community on social chatbots. The results revealed four typical uncertainties: technical uncertainty, relational uncertainty, ontological uncertainty, and sexual uncertainty. We further conducted visibility and sentiment analysis to capture users' response patterns toward various uncertainties. We discovered that users' identification of social chatbots is dynamic and contextual. Our study contributes to expanding, summarizing, and elucidating users' experienced uncertainties and concerns as they form intimate relationships with AI agents.
... A follow-up analysis showed that as the AI chatbot responses demonstrate greater similarity to the user personality and conversation style via the self-learning algorithm, the customers also tend to adapt their communication style to be more similar to the bot. However, lower degrees of reciprocity on the part of customers mitigate the increasing similarity effect on the relationship progress with AI (Croes et al., 2022). Thus, while attraction and similarity could be factors in engendering human-AI relationships, additional relationship components (e.g., reciprocity) may be required for relationship development. ...
... (2022) find that a robot asking more questions helps to form relationships with children by reducing their uncertainty, while Croes et al. (2022) compare the AI self-disclosure to question-asking and find both to be uncertainty reduction strategies. ...
... Largely confirmed by empirical studies, this proposition suggests that increasing the number of social cues would elicit stronger social reactions due to the human tendency to use cognitive shortcuts and heuristics in communication (Von der Pütten et al., 2010). Since the main goal of social AI is facilitating the processes of interaction and communication, customers have expectations for the technology to not only exhibit social cues, but also demonstrate the theory of mind (ability to infer others' beliefs, intents, and desires) by referring to prior conversations and shared experiences and histories with the users (Croes et al., 2022). In a study from our sample, perceiving the mind in an AI service robot increases the robot's persuasiveness and customer willingness to follow its recommendations in the hospitality context (Abdi et al., 2022). ...
Article
Recent advancements in artificial intelligence (AI) and the emergence of AI-based social applications in the market have propelled research on the possibility of consumers developing relationships with AI. Motivated by the diversity of approaches and inconsistent findings in this emerging research stream, this systematic literature review analyzes 37 peer-reviewed empirical studies focusing on human-AI relationships published between 2018 and 2023. We identify three major theoretical domains (social psychology, communication and media studies, and human-machine interactions) as foundations for conceptual development, and detail theories used in the reviewed papers. Given the radically new nature of social AI innovation, we recommend developing a novel theoretical approach that would synergistically utilize cross-disciplinary literature. Analysis of the methodology indicates that quantitative studies dominate this research stream, while qualitative, longitudinal, and mixed-method approaches are used infrequently. Examination of research models and variables used in the studies suggests the need to reconceptualize factors and processes of human-AI relationship, such as agency, autonomy, authenticity, reciprocity, and empathy, to better correspond to the social AI context. Based on our analysis, we propose an integrative conceptual framework and offer directions for future research that incorporate the need to develop a comprehensive theory of human-AI relationships, explore the nomological networks of its key constructs, and implement methodological variety and triangulation. Keywords: AI companion, AI friendship, AI relationship, artificial intelligence, chatbot, conversational agent, digital assistant, literature review, social AI
... Studies have also indicated that self-disclosure may develop in several ways during the HCR formation. Some suggest strengthened self-disclosure across HCR formation (Skjuve et al., 2021), while others suggest a reduction in self-disclosure (Croes and Antheunis, 2021;Croes et al., 2022). One study also indicated that self-disclosure might fluctuate throughout the HCR formation process. ...
... That is, they do not describe how self-disclosure changes over time or its purpose through the HCR. Recently, a few studies addressing this aspect have emerged (Croes and Antheunis, 2021;Skjuve et al., 2021;Croes et al., 2022;Skjuve et al., 2022). Croes and Antheunis (2021) conducted a longitudinal study to understand how HCR forms. ...
... They concluded that self-disclosure decreased partly because Kuki failed at reciprocating. In a more recent study, Croes et al. (2022) analyzed the dialogues between Kuki and the participants from their previous longitudinal study (e.g. Croes and Antheunis, 2021). ...
Article
Self-disclosure in human–chatbot relationship (HCR) formation has attracted substantial interest. According to social penetration theory, self-disclosure varies in breadth and depth and is influenced by perceived rewards and costs. While previous research has addressed self-disclosure in the context of chatbots, little is known about users' qualitative understanding of such self-disclosure and how self-disclosure develops in HCR. To close this gap, we conducted a 12-week qualitative longitudinal study (n = 28) with biweekly questionnaire-based check-ins. Our results show that while HCRs display substantial conversational breadth, with topics spanning from emotional issues to everyday activities, this may be reduced as the HCR matures. Our results also motivate a nuanced understanding of conversational depth, where even conversations about daily activities or play and fantasy can be experienced as personal or intimate. Finally, our analysis demonstrates that conversational depth can develop in at least four ways, influenced by perceived rewards and costs. Theoretical and practical implications are discussed.
... Skjuve et al. (2022) and Xie and Pentina (2022) found users to have meaningful long-term relationships with the social chatbot Replika. Croes et al. (2022) conducted a longitudinal study on the social chatbot Kuki (formerly Mitsuku) and found that users were not inclined to form relationships with this chatbot. Therefore, based on the above discussion and to understand its impact on building attitudes, the following hypothesis has been proposed: ...
Article
Purpose: The purpose of this study is to develop an empirical model by understanding the relative significance of interactive technological forces, such as chatbots, virtual try-on technology (VTO) and e-word-of-mouth (e-WOM), to improve interactive marketing experiences among consumers. This study also tests the perceived effectiveness of e-commerce institutional mechanism (PEEIM) as a moderator between attitude and continued intention. Design/methodology/approach: Data were collected through personal visits and an online survey. The link to the survey questionnaire was shared on different social media platforms and social networking sites. A total of 362 responses obtained in the online and offline modes were considered for this study. Findings: e-WOM emerged as the strongest predictor of attitude, followed by chatbots and VTO. The results of this study revealed that PEEIM did not moderate the relationship between attitude and continued intention. Originality/value: Using the self-determination theory and behavioral reasoning theory as theoretical frameworks, this study is an initial endeavor in the online shopping context to empirically validate interactive forces like chatbots, VTO and e-WOM, together with PEEIM as a moderator, to arrive at a holistic framework. These forces, in turn, act as significant contributors to online shopping satisfaction.
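The moderation question in the abstract above (whether PEEIM conditions the effect of attitude on continued intention) is commonly examined through an interaction term. A minimal sketch of that logic follows, with hypothetical variable and file names; the study itself uses a survey-based structural model, so this only illustrates a generic moderation test, not the authors' analysis.

```python
# Illustrative moderation test: does PEEIM moderate the effect of attitude
# on continued intention? Variable names and the plain OLS specification are
# assumptions for illustration only.
import pandas as pd
import statsmodels.formula.api as smf

survey = pd.read_csv("survey_responses.csv")  # hypothetical: attitude, peeim, continued_intention

# Mean-center predictors so the interaction coefficient is easier to interpret.
for col in ("attitude", "peeim"):
    survey[col + "_c"] = survey[col] - survey[col].mean()

# A significant attitude_c:peeim_c coefficient would indicate moderation.
model = smf.ols("continued_intention ~ attitude_c * peeim_c", data=survey).fit()
print(model.summary())
```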
... Xiaoice is designed to serve an emotional connection with its users (Croes et al., 2022). Given that young people exhibit recurrent anxiety symptoms (Greer et al., 2019) and constitute a significant proportion of users on social media platforms (Panda & Jain, 2018), they are more likely to interact with Xiaoice frequently. ...
Article
This study investigates the impact of social interaction anxiety on compulsive chat with a social chatbot named Xiaoice. To provide insights into the limited literature, the authors explore the role of fear of negative evaluation (FONE) and fear of rejection (FOR) as mediators in this relationship. By applying a variance-based structural equation modeling on a non-clinical sample of 366 Chinese university students who have interacted with Xiaoice, the authors find that social interaction anxiety increases compulsive chat with a social chatbot both directly and indirectly through fear of negative evaluation and rejection, with a more substantial effect of the former. The mediating effect of fear of negative evaluation transfers through fear of rejection, which establishes a serial link between social interaction anxiety and compulsive chat with a social chatbot. Further, frustration about unavailability (FAU) strengthens the relationship between FOR and compulsive chat with a social chatbot (CCSC). These findings offer theoretical and practical insights into our understanding of the process by which social interaction anxiety influences chat behavior with a social chatbot.
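The serial mediation described above (social interaction anxiety -> fear of negative evaluation -> fear of rejection -> compulsive chat) can be illustrated with a product-of-coefficients sketch. The study itself uses variance-based SEM (PLS), so the OLS sketch below, with hypothetical column names, only demonstrates the serial mediation logic rather than the authors' estimation procedure.

```python
# Hedged sketch of the serial mediation logic described in the abstract:
# social interaction anxiety (SIA) -> fear of negative evaluation (FONE)
# -> fear of rejection (FOR) -> compulsive chat with a social chatbot (CCSC).
# Column and file names are assumptions; the study used variance-based SEM,
# so this product-of-coefficients OLS sketch is only an illustration.
import pandas as pd
import statsmodels.formula.api as smf

data = pd.read_csv("xiaoice_survey.csv")  # hypothetical: sia, fone, fear_rejection, ccsc

m1 = smf.ols("fone ~ sia", data=data).fit()                          # a1: SIA -> FONE
m2 = smf.ols("fear_rejection ~ fone + sia", data=data).fit()         # d21: FONE -> FOR
m3 = smf.ols("ccsc ~ fear_rejection + fone + sia", data=data).fit()  # b2: FOR -> CCSC; c': direct effect

serial_indirect = (m1.params["sia"]
                   * m2.params["fone"]
                   * m3.params["fear_rejection"])
print(f"Serial indirect effect (SIA -> FONE -> FOR -> CCSC): {serial_indirect:.3f}")
print(f"Direct effect of SIA on CCSC: {m3.params['sia']:.3f}")
```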
Article
This article explores the ethical problems arising from the use of ChatGPT as a kind of generative AI and suggests responses based on the Human-Centered Artificial Intelligence (HCAI) framework. The HCAI framework is appropriate because it understands technology above all as a tool to empower, augment, and enhance human agency while referring to human wellbeing as a “grand challenge,” thus perfectly aligning itself with ethics, the science of human flourishing. Further, HCAI provides objectives, principles, procedures, and structures for reliable, safe, and trustworthy AI, which we apply to our ChatGPT assessments. The main danger ChatGPT presents is the propensity to be used as a “weapon of mass deception” (WMD) and an enabler of criminal activities involving deceit. We review technical specifications to better comprehend its potentials and limitations. We then suggest both technical (watermarking, styleme, detectors, and fact-checkers) and non-technical measures (terms of use, transparency, educator considerations, HITL) to mitigate ChatGPT misuse or abuse and recommend best uses (creative writing, non-creative writing, teaching and learning). We conclude with considerations regarding the role of humans in ensuring the proper use of ChatGPT for individual and social wellbeing.