Fig 1 - User reactions to inserted conditions considering apprehensiveness.

Source publication
Conference Paper
User behavior is one of the key components of customer engagement and abandonment, which result from a good or bad Quality of Experience. However, methods to evoke and measure user behavior are still understudied. This paper presents an in-depth look at a study in which we measured user behavior during video streaming consumption in a controlled laboratory...

Contexts in source publication

Context 1
... at the results it becomes apparent that the human perception of quality drastically changes when subjects are not required to rate audiovisual quality. To summarize our most important findings, which we will address in the following, Figure 1 shows the distribution of the user reactions and their apprehensiveness. ...
Context 2
... other words, subjects were not shy to admit the impact of the experiment context on their conduct. According to their self-descriptions, eight subjects acted apprehensively (i.e., a third of all participants, see also Figure 1). This behavior usually stems from a conscious process. ...

Similar publications

Article
Evaluating the Quality of Experience (QoE) of video streaming and its influence factors has become paramount for streaming providers, as they want to maintain high satisfaction among their customers. In this context, crowdsourced user studies have become a valuable tool to evaluate different factors which can affect the perceived user experience on a large...

Citations

... Therefore, the authors suggest ensuring that the application being tested offers benefits to the users, such that other (extrinsic) incentives do not influence user behaviour. Similarly, the authors of [38] highlight the presence of demand characteristics (DC), i.e., situations in which participants form an interpretation of the study's purpose and subconsciously change their behavior to fit that interpretation [44]. They caution that DC can lead to distorted results. ...
... Further, in order to evaluate how participants judged their understanding of the research hypothesis and whether they acted apprehensively, [38] recommend asking participants what role they thought they played in the study. In the context of audio/video QoE, Robitza and Raake (2016) [39] suggest revealing the real purpose of the study only at the end in order to assess user behavior in response to quality, so as to shift participants' mindsets away from spotting quality degradation. ...
... Similarly, Robitza and Raake (2016) [39] recommend recording the interaction of users with the system unobtrusively in the background. Further, it is also recommended to use crowdsourcing, passive large-scale measurements, and longitudinal studies with users to allow for data collection in a natural environment [38]. In addition, Labonté-LeMoyne (2018) suggests that it is necessary to plan for data loss in terms of time and participant recruitment, as this may be common when increasing ecological validity. ...
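To make the "unobtrusive background recording" recommendation concrete, the following is a minimal sketch of client-side playback-event logging for an HTML5 video player. It uses only standard browser APIs; the /log endpoint and the particular event selection are illustrative assumptions for this sketch, not part of [39].

```typescript
// Minimal sketch: unobtrusive logging of user interactions with an
// HTML5 video player. The "/log" endpoint is a hypothetical collector.
type PlayerEvent = {
  type: string;      // e.g. "play", "pause", "seeking", "waiting"
  mediaTime: number; // playback position in seconds
  wallClock: number; // Unix timestamp in milliseconds
};

const events: PlayerEvent[] = [];
const video = document.querySelector<HTMLVideoElement>("video");

if (video) {
  // Events relevant to behavioral QoE studies: interactions (play,
  // pause, seeking) as well as impairments (waiting = stalling).
  for (const type of ["play", "pause", "seeking", "waiting", "ended"]) {
    video.addEventListener(type, () => {
      events.push({
        type,
        mediaTime: video.currentTime,
        wallClock: Date.now(),
      });
    });
  }

  // Flush the buffer when the page is hidden or closed; sendBeacon
  // does not block unload and stays invisible to the participant.
  document.addEventListener("visibilitychange", () => {
    if (document.visibilityState === "hidden" && events.length > 0) {
      navigator.sendBeacon("/log", JSON.stringify(events));
      events.length = 0;
    }
  });
}
```

Logging passively in this way avoids drawing the participant's attention to the measurement, which is precisely the point of the recommendation: behavior is captured without reinforcing apprehensiveness.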
Article
The concept of conducting ecologically valid user studies is gaining traction in the field of Quality of Experience (QoE). However, despite previous research exploring this concept, the increasing volume of studies has made it challenging to obtain a comprehensive overview of existing guidelines and the key aspects to consider when designing ecologically valid studies. Therefore, this paper aims to provide a systematic review of research articles published between 2011 and 2021 that offer insight into conducting ecologically valid user studies. Of the 782 initially retrieved studies, 12 met the predefined criteria and were included in the final review. The systematic review resulted in the extraction of 55 guidelines that provide guidance towards conducting ecologically valid user studies. These guidelines have been grouped into 8 categories (Environment, Technology, Content, Participant Recruitment, User Behavior, Study Design, Task, and Data Collection) overarching the three main dimensions (Setting, Users, and Research Methodology). Furthermore, the review discusses the flip side of ecological validity and the implications for QoE research, and provides a basic visualisation model for assessing the ecological validity of a study. In conclusion, the current review indicates that future research should address in more detail how and when research approaches characterized by high ecological validity (and correspondingly, low internal validity) and those characterized by low ecological validity (and normally high internal validity) can best complement each other, in order to better understand the key factors influencing QoE for various types of applications, user segments, and settings. Further, we argue that more transparency around the (sub)dimensions of ecological validity with respect to a particular study or set of studies is necessary.
... In the Data analysis section, we described in detail the observation of various behaviors based on screen recordings of the subjects. Examples of related research in the field of user behavior testing can be found in [6,10,11,12,13]. The study concludes that experimental biases may occur when conducting user behavior and QoE assessments in a laboratory setting. ...
... However, it is difficult to generalize these observations, as only a few participants used the interaction possibilities in the current study. This might result from the fact that the participants were told to watch a video as part of a paid crowdsourcing task, or from apprehensiveness about impacting test results by interacting with the page [119]. The test setting may lead to unnatural behavior, in particular a stronger focus on potential video impairments and less natural interactions with the web page. ...
Thesis
Nowadays, employees have to work with applications, technical services, and systems every day for hours. Hence, performance degradation of such systems might be perceived negatively by the employees, increase frustration, and might also have a negative effect on their productivity. The assessment of an application's performance in order to ensure its smooth operation is part of application management. Within this process it is not sufficient to assess the system performance solely on technical performance parameters, e.g., response or loading times. These values have to be set in relation to the perceived performance quality on the user's side - the quality of experience (QoE). This dissertation focuses on the monitoring and estimation of the QoE of enterprise applications. As building models to estimate the QoE requires quality ratings from the users as ground truth, one part of this work addresses methods to collect such ratings. Besides the evaluation of approaches to improve the quality of results of tasks and studies completed on crowdsourcing platforms, a general concept for monitoring and estimating QoE in enterprise environments is presented. Here, relevant design dimensions of subjective studies are identified and their impact on the QoE is evaluated and discussed. By considering the findings, a methodology for collecting quality ratings from employees during their regular work is developed. The method is realized by implementing a tool to conduct short surveys, which was deployed in a cooperating company. As a foundation for learning QoE estimation models, this work investigates the relationship between user-provided ratings and technical performance parameters. This analysis is based on a data set collected in a user study at a cooperating company over a time span of 1.5 years. Finally, two QoE estimation models are introduced and their performance is evaluated.
... In the absence of recommended practices for such user behavior related measurements, these factors are not considered in the models reviewed in this paper, with the exception of Mok et al. [61], where the authors take end-user actions into account in the design of their model. Only a few works so far have investigated user behavior and its effect on the end-user QoE [110]. ...
Article
With the recent increased usage of video services, the focus has shifted from traditional quality of service-based video delivery to quality of experience (QoE)-based video delivery. Over the past 15 years, many video quality assessment metrics have been proposed with the goal of predicting the video quality as perceived by the end user. HTTP adaptive streaming (HAS) has recently gained much attention and is currently used by the majority of video streaming services, such as Netflix and YouTube. HAS, using reliable transport protocols such as TCP, does not suffer from image artifacts due to packet losses, which are common in traditional streaming technologies. Hence, the QoE models developed for other streaming technologies alone are not sufficient. Recently, many works have focused on developing QoE models targeting HAS-based applications. Also, the recently published ITU-T Recommendation series P.1203 proposes a parametric bitstream-based model for the quality assessment of progressive download and adaptive audiovisual streaming services over a reliable transport. The main contribution of this paper is to present a comprehensive overview of recent and ongoing work in the field of QoE modeling for HAS. The HAS QoE models, influence factors, and subjective test methodologies are discussed, as well as existing challenges and shortcomings. The survey can serve as a guideline for researchers interested in QoE modeling for HAS and also discusses possible future work.
... While the test's results look promising, the test still did not allow users to interact with the playback system. Towards the goal of analyzing how users behave in response to streaming problems, we therefore developed a new test methodology, which is described in full detail in [17,16]. ...
... Second, there are still strong biases present in such behavioral methods, related to subjects acting "apprehensively" and the test situation limiting their freedom to interact naturally. We focused on this topic of experimental biases for behavioral tests in another publication [16], where we discussed methods for avoiding these biases as well as challenges for upcoming research. ...
Conference Paper
When users decide to cancel a video playback because the video is not loading, this is a first step in their abandonment of a service. Internet and video service providers, however, are strongly interested in engaging people to continue using their offerings. In this work, we investigate when, how, and why users react to typical problems present in online video services, such as stalling or large quality variations. We develop new test methodologies that elicit and measure user behavior in different contexts (e.g., a laboratory environment or the user's home). We aim to identify the causes and implications of certain user behavior and its relation to user engagement in the long term. Also, for the industry, we create models and tools that can be used in quality monitoring scenarios, helping to estimate Quality of Experience and user behavior in order to predict engagement and prevent abandonment.