Acta Polytechnica Hungarica Vol. 11, No. 1, 2014
Evaluation of Flexible Graphical User Interface
for Intuitive Human Robot Interactions
Balázs Dániel1, Péter Korondi1, Gábor Sziebig2, Trygve Thomessen3

1 Department of Mechatronics, Optics, and Mechanical Engineering Informatics, Budapest University of Technology and Economics, PO Box 91, 1521 Budapest, Hungary
e-mail: daniel@mogi.bme.hu; korondi@mogi.bme.hu
2 Department of Industrial Engineering, Narvik University College, Lodve Langes
gate 2, 8514 Narvik, Norway
e-mail: gabor.sziebig@hin.no
3 PPM AS, Leirfossveien 27, 7038 Trondheim, Norway
e-mail: trygve.thomessen@ppm.no
Abstract: A new approach to industrial robot user interfaces is necessary, as small and medium-sized enterprises show a growing interest in automation. The increasing number of robot applications in small-volume production requires new techniques to ease the use of these sophisticated systems. This paper focuses on shop-floor operation. A Flexible Graphical User Interface is presented, which is based on cognitive infocommunication (CogInfoCom) and implements the Service Oriented Robot Operation concept. The definition of CogInfoCom icons is extended by the introduction of identification, interaction and feedback roles. The user interface is evaluated with experiments. Results show that a significant reduction in task execution time and a lower number of required interactions are achieved thanks to the intuitiveness of the human-centered design.
Keywords: industrial robotics; human robot interaction; cognitive infocommunication;
flexible robot cell; graphical user interface; usability
1 Introduction
In the era of touch-screen-based smartphones, interaction between humans and machines is becoming a frequent event. The communication between people and robots, in particular, is an active field of research. Human Robot Interaction (HRI) studies aim at the user-friendly application of different robot systems in cooperation with humans. One example is an interactive conversation robot which collects multi-modal data and bonds with human partners [1].
While interaction with an industrial robot is traditionally considered Human Machine Interaction (HMI) due to the robot's inferior level of autonomy and complexity [2], these systems are also spreading into human environments. Shared workspaces [3], teaching by demonstration [4]: all these improvements in industrial robotics require a higher level of communication. The need for better and more intuitive user interfaces for industrial manipulators is increasing.
The reason for this increasing interest is that a wide range of Small and Medium-Sized Enterprises (SMEs) are motivated to invest in industrial robot systems and automation due to rising cost levels in western countries. In recent years a significant share of robot installations in industry has required high flexibility and high accuracy for the low-volume production of SMEs [5]. This tendency adds new challenges to supporting and transferring knowledge between robotics professionals and SMEs. The difficulty of operating a robotic manipulator is present both in shop-floor production and in remote support for the robot cell. Operators with less expertise will supervise the robot cells in SMEs for economic reasons, and small companies usually have no dedicated robot system maintenance group. Remote operation of the industrial robot systems from the system integrator's office offers an alternative solution. This is based on the premise that if remote assistance can be performed for the SMEs, the enterprise can obtain highly reliable, productive, faster and cheaper support irrespective of the geographical distance between the enterprise and the system integrator or other support services. On the other hand, the need for assistance from professionals can be reduced by introducing user/operator interfaces which better fit inexperienced users and provide an intuitive and flexible manner of operation.
The paper is organized as follows: Section 2 briefly summarizes the properties of
Human Robot Interaction for industrial robots. Section 3 presents a Flexible
Graphical User Interface implementation test with quantitative and qualitative
results.
2 Human Robot Interaction
Scholtz [2] proposed five different roles for humans in HRI: supervisor, operator, mechanic, teammate and bystander. These define the necessary level of information exchange from both sides. In order to keep the interaction continuous, switching between roles is inevitable in certain situations. Efficient user interfaces have to display an adequate level of detail in complex systems such as robot cells.
Automated systems may be designed in a machine-centered or a human-centered way [6]. While the former requires high-level knowledge from the user for operation, the latter is more adaptable for flexible systems which serve both seasoned and unseasoned personnel. Standard HRI for industrial equipment usually follows the machine-centered approach, so complex systems are accessible only through complicated user interfaces.
Human-centered design applies multidisciplinary knowledge, combining the necessary factors and benefits for both machinery and human operation. The ISO standard 9241-210:2010 [7] defines it as an activity to increase productivity in parallel with improved working conditions through the use of ergonomics and human factors. User-friendly or human-friendly systems are designed with the aim of minimizing the machine factor. This implies that in this case robots have to adapt to the operator; the difficulty lies in providing a comprehensive and appropriate model of human behavior [8].
Human factors also include psychological aspects. HRI should be analyzed from this point of view to achieve a better match between robot technology and humans. A systematic study of the operator's needs and preferences is indispensable [9] to meet expectations on the psychological level; moreover, in order to satisfy these needs, multimodal user interfaces are necessary, which may facilitate communication with intuitive and cognitive functions [9, 10].
Figure 1
Modern Teach Pendants.
From top-left corner: FANUC iPendant [11], Yaskawa Motoman NX100 teach pendant [12], ABB
FlexPendant [13], KUKA smartPad [14], ReisPAD [15], Nachi FD11 teach pendant
For industrial robotic manipulators the standard user interface is the Teach Pendant. It is a mobile, hand-held device, usually with a customized keyboard and a graphical display. Most robot manufacturers develop distinctive designs (see Figure 1) and include a great number of features, increasing both complexity and flexibility.
Input methods vary from key-centric to purely touch-screen-based. Feedback and system information are presented on a graphical display in all cases; however, the operator may utilize a great number of other information channels: touch, vision, hearing, smell and taste. The operator can also take advantage of the brain's ability to integrate the information acquired from his or her senses. Thus, although the operator's main sight and attention are oriented towards the robot's tool, she will immediately shift her attention to any robot link colliding with an obstacle once the collision is intercepted by her peripheral vision.
The science of integrating informatics, info-communication and cognitive sciences, called CogInfoCom, investigates the advantages of combining information modalities and has working examples in virtual environments, robotics and telemanipulation [10, 16]. The precise definition and description of CogInfoCom is presented in [17], terminology is provided in [18], while [19] walks the reader through the history and evolution of CogInfoCom.

Introducing this concept into the design of industrial robot user interfaces is a powerful tool for achieving human-centred systems and more efficient human robot interaction.
3 Flexible Graphical User Interface
First, the focus is on possible improvements to the existing Teach Pendant information display. Traditional industrial graphical user interfaces are designed to handle a large number of features; thus the organization of the menus and the use of the system are complex and complicated.

In a real-life example, the robot cell operator had to ask the system integrator for assistance in a palletizing application because the manipulator constantly moved to wrong positions. Without a clear indication on the robot's display, the user could not recognize that the programmed positions were shifted because the system was in the middle of the palletizing sequence. The resetting option was buried two menu levels deep in the robot controller's constant settings. With a flexible user interface this could have been avoided by providing a custom surface for palletizing instead of showing the robot program, the date, etc.
In most cases the customization of the robot user interface is possible. A separate computer and software are required [20], and due to the complexity of the system the customization is done by the system integrator. Naturally, the integrator cannot be fully aware of the needs of the operator; thus the flexibility of a robot cell also depends on the capabilities of the framework in which the graphical user interface runs. The communication barrier between the operator and the integrator influences the efficiency of production through inadequate and non-optimized information flow (see Figure 2). Differences in competence level, geographical distance or cultural distance can all make this barrier difficult to overcome.
The Flexible Graphical User Interface (FGUI) concept aims to close this gap. The system integrator still has the opportunity to compile task-specific user interfaces, but the framework includes pre-defined, robot-specific elements which are at the operator's hands at all times. The functionality is programmed by the integrator and represented as she thinks most fitting, but the final information channel can be rearranged by the user/operator to achieve an efficient, human (operator) centered surface.
Figure 2
Connection of a robot system with the operator and the integrator
3.1 Shop-floor operation
During shop-floor operation the user should communicate with the robot cell at a high level. In high-volume, highly automated production lines this interaction is at best restricted to starting a program at the beginning of the worker's shift and stopping it when the shift is over. For SMEs this approach is not feasible: frequent reconfiguration and constant supervision are inevitable. Therefore a service-oriented user interface is favourable, which hides the technical details of a multipurpose robot cell behind an abstraction level. This abstraction should be created by utilizing the principles of CogInfoCom, allowing intuitive use of the robot cell.
The connections between the actual robot programs and the operator's user interface are presented in Figure 3. As the system keeps track of the parts in a pick and place operation, this data is stored as a set of variables. The operator can observe these values through traditional display items, but the FGUI allows them to be presented in a different way: small lamps give visual information on the system's state. The operator can instinctively and instantly compare the actual state of the robot cell with the information stored in the robot controller and detect defects faster. This is an application of representation-bridging in CogInfoCom, because the information on the robot variable is still visual but the representation differs from standard robot variable monitoring. Direct observation of the variables would require the operator to know the system integrator's convention of data representation.
Figure 3
Connections between functionality and service-oriented layers
The selection of programs for the operation is reduced to service requests initiated by button presses paired with an image of the part. In CogInfoCom semantics [18], the images of parts, the buttons, the indicator lamps, the progress bar and the text box are all visual icons bridging the technical robot data with operation parameters and statuses. The high-level message generated by these entities is the list of achievable item movements in the case of a pick and place service, together with feedback on the current operation provided by the user interface rather than by visual observation of the cell; thus we consider this application of a CogInfoCom channel as an eyecon message transferring system in general.
Furthermore, considering a group of screen elements (e.g. the image of the gear, the buttons "Table 1" and "Table 2" and the indicator lamps), we use low-level direct conceptual mapping. The picture of the part generates an instant identification of the robot service objective and of the surface through which to interact with the robot controller. The buttons represent identification as well as interaction with the robot cell. The robot system's internal mapping of the real world is sent back to the user through the indicator lamps in the form of feedback.

These applications indicate that CogInfoCom icons not only generate and represent a message in the communication but also have roles. These roles may depend on the actual implementation and on the concept transmitted by the messages.
In human robot interaction three main roles may be distinguished:
identification role,
interaction role,
feedback role.
The simple instruction "Move the gear from Table 1 to Table 2" given to the operator does not require additional mapping by the user; thus the human robot interaction is simplified and becomes human-centered. The gear is identified by its image, manipulated by interacting with the button, and feedback is received through the indicator lamp.
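As an illustration, the three roles can be sketched as a simple data model. This is only a conceptual sketch; the type and element names below are assumptions for illustration, not part of the FGUI implementation:

```python
from dataclasses import dataclass
from enum import Flag, auto

class Role(Flag):
    """Roles a CogInfoCom icon may take in human robot interaction."""
    IDENTIFICATION = auto()
    INTERACTION = auto()
    FEEDBACK = auto()

@dataclass
class Icon:
    """A screen element of the FGUI together with its roles."""
    name: str
    roles: Role

# The screen-element group from the example above (names are illustrative):
gear_image = Icon("gear image", Role.IDENTIFICATION)
table1_button = Icon("Table 1 button", Role.IDENTIFICATION | Role.INTERACTION)
indicator_lamp = Icon("indicator lamp", Role.FEEDBACK)
```

A single icon may thus carry several roles at once, as the button does here, which is why a flag set rather than a single-valued attribute fits the concept.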
Practically, the user interface implements an abstraction level between the technical realization of the operations and the service-oriented operation. A main robot program monitors the inputs from the user. Pressing one of the buttons loads a number into a variable, which causes the controller to switch to the program indicated by that number. The program contains the pre-defined pose values to be executed sequentially. The current program line number divided by the total number of lines indicates the progress of the task in the progress bar, since one program contains only one item movement from one table to the other. Exiting the main program also sets the text box to "Operation in progress" and re-entering it resets the text to "Standby".
The robot controller keeps track of the objects in the robot cell by adjusting an integer variable according to the object's current position in the motion sequence. Values 1, 2 and 0 represent the item placed on Table 1, placed on Table 2, and lifted and moved by the robot, respectively. Mapping this information onto the indicator lamps means that when the object is on one of the tables, the lamp next to the corresponding table button lights up, sending the message that this table is occupied. When the object is in the air, both lamps are dimmed.
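The variable-to-lamp and line-to-progress mappings described above can be sketched as follows. Python is used for illustration only; the actual logic runs inside the robot controller, and the function names are assumptions:

```python
def lamp_states(position_var: int) -> tuple[bool, bool]:
    """Map the controller's object-position variable onto the two lamps.

    1 -> item on Table 1, 2 -> item on Table 2,
    0 -> item lifted and moved by the robot (both lamps dimmed).
    Returns (lamp at Table 1, lamp at Table 2).
    """
    return position_var == 1, position_var == 2

def progress(current_line: int, total_lines: int) -> float:
    """Progress-bar value: executed program lines over total lines.

    Valid because one program contains exactly one item movement.
    """
    return current_line / total_lines
```

For example, `lamp_states(0)` yields `(False, False)`, signalling that the item is held by the robot and neither table is occupied.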
It is generally important to rate the success of a concept and its implementations with experiments. The improvement in communication may be measured both quantitatively and qualitatively.
3.2 Concept evaluation with user tests
Justification of the service-based flexible user interface was performed through a series of tests with real users. It was a comparative test of a traditional, rather conservative system and the newly developed flexible graphical user interface. The robot cell was set up for two different services:

a pick and place operation with two positions for three work pieces,
a parameterizable delivery system for bolting parts.
3.2.1 Test description
Test participants first executed the two main tasks using the traditional system (Figure 4), then repeated them using the flexible user interface (Figure 5 and Figure 6). During the tests, user interactions with the robot system were recorded by cameras, key logging and screen capturing.
Figure 4
Traditional Graphical User Interface
At the beginning of each task, a briefing was given to the participants on the general aim of the task to be executed (Table 1). After that, they read through a summary of the necessary settings (Table 2 and Table 3) to be made before the robot could perform its operation. At this point participants had the opportunity either to proceed with a user manual containing step-by-step instructions or to continue on their own. If they chose the second option, they could still refer to the manual whenever they needed suggestions on how to continue.
Figure 5
Flexible Graphical User Interface for Task 3
Figure 6
Flexible Graphical User Interface for Task 4
During the execution of Task 1, the only action needed was to select the appropriate program for the robot based on the required part movement. Task 2 was more complicated: three internal variables had to be set according to the number of nuts and bolts and the number of the delivery box.

The participants were provided with the new user interface for the execution of Task 3 and Task 4; thus the direct interaction with the robot controller variables and settings was hidden.
Table 1
General aims of tasks

Task ID  | Description                                   | Interface¹
Task 1.1 | Move the pipe from Table 1 to Table 2         | TGUI
Task 1.2 | Move the cutting head from Table 2 to Table 1 | TGUI
Task 2   | Deliver two bolts and three nuts to Box 1     | TGUI
Task 3.1 | Move the pipe from Table 2 to Table 1         | FGUI
Task 3.2 | Move the cutting head from Table 1 to Table 2 | FGUI
Task 4   | Deliver two bolts and three nuts to Box 1     | FGUI
Table 2
Settings for Task 1

Item         | Source  | Destination | Program Number
Pipe         | Table 1 | Table 2     | 816
Pipe         | Table 2 | Table 1     | 815
Cutting head | Table 1 | Table 2     | 814
Cutting head | Table 2 | Table 1     | 813
Gear         | Table 1 | Table 2     | 812
Gear         | Table 2 | Table 1     | 811
Table 3
Settings for Task 2

Parameter       | Variable        | Value
Program         | Program Number  | 821
Number of nuts  | Integer Nr. 014 | 3
Number of bolts | Integer Nr. 015 | 2
Delivery box ID | Integer Nr. 016 | 1
3.2.2 Results
The evaluation was conducted with four participants, all male, between the ages of 25 and 27. All four have an engineering background; two have moderate and two have advanced experience in robot programming. Participants were advised that audio and video were being recorded solely for scientific analysis, and they were assured that the test could be interrupted at any time on their initiative.
All participants were able to execute all of the tasks in a reasonable amount of time. Two users reported difficulties in setting the program number during Task 1.2. The problem turned out to be a software bug in the traditional robot controller user interface; the user manual was modified accordingly, although none of the participants followed the user manual steps strictly, most likely due to their previous experience with robots.

¹ TGUI: Traditional Graphical User Interface, FGUI: Flexible Graphical User Interface
Figure 7
Synchronised video recordings taken during testing
The collected data were evaluated after the tests. Three parameters were selected to represent the difference between the traditional and the flexible approach: task execution duration, number of interactions, and the ratio of touch screen commands to key presses.
Execution time was measured between the first interaction with the Teach Pendant, recorded by the key logger and mouse logger running on the robot controller, and the last command that ordered the robot to move. In the TGUI measurements the last interaction was determined from the time stamp on the video, since logging of the program start button on the controller housing was not in place at the time of the experiment. In the case of the FGUI, the start of the robot movement could be determined from the mouse logging data.
The mean execution times using the conservative system for Task 1.1, Task 1.2 and Task 2 were 36, 37 and 135 seconds, respectively. In contrast, the durations of Task 3.1, Task 3.2 and Task 4 were 31, 0 and 30 seconds with the FGUI. The data are presented in Table 4. The total mean times spent on Task 1 and Task 3 were 72.1 and 38.1 seconds, respectively. All mean values presented are based on three measurements. Due to the low number of measurements, further statistical data were not computed.
Table 4
Task execution time

Task duration [s]
        | 1.1  | 1.2  | 2     | 3.1  | 3.2² | 4
User 1  | 50.0 | 53.5 | 131.5 | 64.5 | 0    | 21.7
User 2  | 25.2 | 30.9 | 121.0 | 27.9 | 0    | 11.1
User 3  | 31.6 | 25.1 | 151.3 | 21.8 | 0    | 22.4
Mean    | 35.6 | 36.5 | 134.6 | 38.1 | 0    | 18.4
User 4³ | N/A  | N/A  | 126.3 | 9.4  | 0    | 63.8
The participants had to configure the robot controller using the Teach Pendant, which offered two kinds of interaction: key presses and touch screen interactions. A virtual representation of the keyboard is depicted in Figure 7. In addition, users had to deactivate the joint brakes and energize the motors, as well as start the robot program, by pressing buttons on the robot controller housing. All interactions were counted by the logging software, and the touch interaction ratio was later calculated as follows:

TR = \frac{n_{touch}}{n_{touch} + n_{keypress} + n_{controller}} \cdot 100\%    (1)

where n_touch is the number of touch screen interactions, n_keypress is the number of button presses on the Teach Pendant and n_controller is the number of button presses on the controller housing. The total number of interactions and the touch ratio are listed in Table 5. The data loss in the execution time measurement did not affect the counting of interactions; thus the mean values are calculated from four samples.
Table 5
Number of interactions and touch ratio

Interactions [-] (Touch ratio [%])
       | 1.1    | 1.2     | 2       | 3.1    | 3.2     | 4
User 1 | 7 (14) | 21 (19) | 57 (11) | 7 (71) | 1 (100) | 10 (100)
User 2 | 7 (14) | 10 (30) | 33 (15) | 6 (50) | 1 (100) | 6 (100)
User 3 | 7 (14) | 14 (50) | 53 (15) | 6 (83) | 1 (100) | 8 (100)
User 4 | 7 (14) | 13 (23) | 26 (17) | 3 (67) | 1 (100) | 7 (71)
Mean   | 7 (14) | 15 (31) | 42 (15) | 6 (68) | 1 (100) | 8 (93)
² Task duration is zero because the first interaction already started the task execution for the robot.
³ Due to data corruption, the Task 1.1 and Task 1.2 durations are not available. Mean values are calculated from three measurements in all cases.
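Equation (1) can be expressed directly in code. Note that the breakdown of the seven Task 1.1 interactions into one touch and six presses below is an illustrative assumption, chosen to be consistent with the 14% touch ratio reported in Table 5:

```python
def touch_ratio(n_touch: int, n_keypress: int, n_controller: int) -> float:
    """Touch ratio TR from Eq. (1), in percent.

    n_touch      -- touch screen interactions
    n_keypress   -- button presses on the Teach Pendant
    n_controller -- button presses on the controller housing
    """
    return 100.0 * n_touch / (n_touch + n_keypress + n_controller)

# Hypothetical split of User 1's seven Task 1.1 interactions:
# 1 touch, 4 Teach Pendant key presses, 2 controller-housing presses.
tr = touch_ratio(1, 4, 2)  # rounds to the 14% shown in Table 5
```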
3.2.3 Discussion
This testing aimed to evaluate the new concept of the service oriented flexible user interface. No particular selection criteria were applied to the participants, and the dataset is not large enough for statistical analysis or a true usability evaluation [21]. However, trends and impressions regarding the difference in user performance can be synthesized. Inspection of the video recordings shows that the participants did not fully understand that Task 1 and Task 3, as well as Task 2 and Task 4, were identical except that a different user interface had to be used during execution. Although they did not follow the step-by-step instructions of the user manual in their series of actions, participants returned to the manual over and over again to confirm their progress. An excessive number of interactions is present compared with the number of interactions required by the user manual (Table 6).
Table 6
Comparison of required and performed interactions

         | Required | Performed | Difference | Difference [%]
Task 1.1 | 7        | 7         | 0          | 0%
Task 1.2 | 9        | 15        | +6         | +67%
Task 2   | 25       | 42        | +17        | +68%
Task 3.1 | 3        | 6         | +3         | +100%
Task 3.2 | 1        | 1         | 0          | 0%
Task 4   | 6        | 8         | +2         | +33%
Total    | 51       | 79        | +28        | +55%
At the beginning, the state of the user interface and the controller was the same in every test, but at the start of Task 1.2 this changed due to the users' different preferences on how to stop a robot program. The significant difference for Task 2 is caused by the fact that there are several ways of inputting a variable in the traditional GUI, but the shortest one (according to the robot manufacturer's user manual) was not used by any of the participants.

The excess interactions for Task 3 are actions to dismiss messages on the flexible user interface caused by the users' previous actions. The increased number of touches in Task 4 is due to a usability issue: the display item for selecting the amount of parts to be delivered was too small, so the selection could not be made without repeated inputs. The verdict of this investigation is that users tend to use less efficient ways to set up the robot controller, which may induce errors, and execution time increases due to the need to recover from those errors.
Figure 8
Interaction pattern with traditional (Task 2) and flexible, intuitive system (Task 4)
The intuitiveness of the new approach can be demonstrated by examining the interaction patterns. Figure 8 shows the interactions of User 1 in detail. The user input for the traditional system comes in bursts. The slope of each burst is close to the final slope of the FGUI interaction (Task 4 in Figure 8), but the time between these bursts decreases the overall speed of the setup. The recordings show that in Task 2 this time is generally spent figuring out the next step, either by looking in the manual or by searching for clues on the display. Finally, the press of the start button for Task 2 is delayed because the user double-checked the inputted values.
In contrast, the inputs for Task 4 are distributed uniformly in time, and the delay between interactions is significantly lower. The user did not have to refer to the user manual because the user interface itself gave enough information for the setup. This means that this composition of the user interface, made specifically for this service of the robot cell, offers a more intuitive and easier-to-use interface than the traditional one, and that the CogInfoCom messages were transmitted successfully.
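The burst-versus-uniform distinction can be quantified from logged timestamps, e.g. by comparing the gaps between consecutive interactions. The sketch below uses made-up timestamps, not the study's data:

```python
def gaps(timestamps: list[float]) -> list[float]:
    """Delays between consecutive logged interactions, in seconds."""
    return [b - a for a, b in zip(timestamps, timestamps[1:])]

# Bursty TGUI-style log: quick runs of inputs separated by long pauses
# (time spent consulting the manual or searching the display).
tgui_log = [0.0, 1.0, 2.0, 20.0, 21.0, 40.0]

# Uniform FGUI-style log: evenly spaced inputs.
fgui_log = [0.0, 3.0, 6.0, 9.0, 12.0]
```

In a bursty log the maximum gap is many times the minimum gap, while in a uniform log the gaps are nearly equal; this is exactly the pattern visible in Figure 8.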
The overall picture shows that the FGUI performs significantly better than the TGUI (see Figure 9). For comparison, the execution time of Task 3 was 34 seconds shorter than that of Task 1, while Task 4 was 116 seconds shorter than Task 2. Regarding the necessary interactions with the system, the FGUI reduced them to 23.4% of the original count, leaving around a quarter of the opportunities for error.
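The 23.4% figure follows directly from the mean interaction counts in Table 5, as a quick check shows:

```python
# Mean interactions per task from Table 5.
# TGUI tasks: 1.1, 1.2, 2; FGUI tasks: 3.1, 3.2, 4.
tgui_interactions = 7 + 15 + 42   # = 64
fgui_interactions = 6 + 1 + 8     # = 15

# Interactions with FGUI as a fraction of interactions with TGUI, in percent.
reduction = 100.0 * fgui_interactions / tgui_interactions  # ≈ 23.4
```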
Figure 9
Final results showing the improved performance of the FGUI: reduced task execution time, decreased number of interactions and increased ratio of touch screen usage to key presses
The Flexible Graphical User Interface also increased the use of the touch screen significantly (e.g. from 15% to 93% between Task 2 and Task 4). Participants reported that the use of images and the composition of the UI helped them connect more easily the parameters given by the instructions with the actions necessary to input these values into the controller. This is the result of the deliberate design of the abstraction level connected to the robot cell's service; the design is based on the principle of CogInfoCom messages to ensure human-centred operation.

Concluding the discussion, it can be stated that the Flexible Graphical User Interface performed as expected: users were able to operate the robot cell faster, more intuitively and with greater self-confidence. Due to the low number of participants, further verification is necessary; the organisation of a new test for a deeper usability and intuitiveness investigation (including non-expert users) was underway at the time of writing, and the results will be published in later papers.
Conclusions
The Flexible Graphical User Interface implementation based on Service Oriented Robot Operation was presented. The application of CogInfoCom principles was described, and a new property of the CogInfoCom icon notion was introduced. This new property is the role of the icon in message transfer; for human robot interaction, identification, interaction and feedback roles were identified.

The new approach, represented by Service Oriented Robot Operation in human robot interaction for industrial applications, was examined by user evaluation. The results show a decrease in task setup and completion time, and the reduced number of interactions indicates a more intuitive way of communication between man and machine.
Acknowledgement
This work was supported by the BANOROB project, funded by the Royal Norwegian Ministry of Foreign Affairs. The authors would also like to thank the Norwegian Research Council for supporting this research through the Industrial PhD programme.
References
[1] B. Vaughan, J. G. Han, E. Gilmartin, N. Campbell: Designing and Implementing a Platform for Collecting Multi-Modal Data of Human-Robot Interaction, Acta Polytechnica Hungarica, Vol. 9, No. 1, pp. 7-17, 2012.
[2] J. Scholtz: Theory and Evaluation of Human Robot Interactions, Proceedings of the 36th Annual Hawaii International Conference on System Sciences, p. 10, 2003.
[3] S. Stadler, A. Weiss, N. Mirnig, M. Tscheligi: Anthropomorphism in the
factory - a paradigm change?, Human-Robot Interaction (HRI), 2013 8th
ACM/IEEE International Conference on, pp. 231-232, 2013.
[4] B. Solvang, G. Sziebig: On Industrial Robots and Cognitive
Infocommunication, Cognitive Infocommunications (CogInfoCom 2012),
3rd IEEE International Conference on, Paper 71, pp. 459-464, 2012.
[5] World Robotics - Industrial Robots 2011, Executive Summary. Online, Available: http://www.roboned.nl/sites/roboned.nl/files/2011_Executive_Summary.pdf, 2011.
[6] G. A. Boy (ed.): Human-centered automation, in The handbook of human-
machine interaction: a human-centered design approach. Ashgate
Publishing, pp. 3-8, 2011.
[7] Human-centred design for interactive systems, Ergonomics of human-
system interaction -- Part 210, ISO 9241-210:2010, International
Organization for Standardization, Stand., 2010.
[8] Z. Z. Bien, H.-E. Lee: Effective learning system techniques for human–
robot interaction in service environment, Knowledge-Based Systems, Vol.
20, No. 5, pp. 439-456, June 2007.
[9] A. V. Libin, E. V. Libin: Person-Robot Interactions from the Robot
Psychologists' Point of View: the Robotic Psychology and Robotherapy
Approach, Proceedings of the IEEE, Vol. 92, No. 11, pp. 1789-1803, Nov.
2004.
[10] P. Korondi, B. Solvang, P. Baranyi: Cognitive robotics and
telemanipulation, 15th International Conference on Electrical Drives and
Power Electronics, Dubrovnik, Croatia, pp. 1-8, 2009.
[11] FANUC iPendant™ brochure, FANUC Robotics America Corporation
(http://www.fanucrobotics.com), 2013.
[12] Yaskawa Motoman Robotics NX100 brochure, Yaskawa America, Inc.
(http://www.yaskawa.com), 2011.
[13] FlexPendant - Die neue Version des Programmierhandgerätes bietet noch
mehr Bedienkomfort (in German), ABB Asea Brown Boveri Ltd.
(www.abb.com), 2009.
[14] KUKA smartPad website, KUKA Roboter GmbH, (www.kuka-
robotics.com), 2013.
[15] Smart robot programming, the easy way of programming, Reis GmbH &
Co. KG Maschinenfabrik. (www.reisrobotics.de), 2013.
[16] P. Baranyi, B. Solvang, H. Hashimoto, P. Korondi: 3D Internet for
Cognitive Infocommunication, Hungarian Researchers on Computational
Intelligence and Informatics, 10th International Symposium of 2009 on,
pp. 229-243, 2009.
[17] P. Baranyi, A. Csapo: Definition and Synergies of Cognitive
Infocommunications, Acta Polytechnica Hungarica, Vol. 9, No. 1, pp. 67-
83, 2012.
[18] A. Csapo, P. Baranyi: A Unified Terminology for the Structure and
Semantics of CogInfoCom Channels, Acta Polytechnica Hungarica, Vol. 9,
No. 1, pp. 85-105, 2012.
[19] G. Sallai: The Cradle of the Cognitive Infocommunications, Acta
Polytechnica Hungarica, Vol. 9, No. 1, pp. 171-181, 2012.
[20] B. Daniel, P. Korondi, T. Thomessen: New Approach for Industrial Robot
Controller User Interface, Proceedings of the IECON 2013 - 39th Annual
Conference of the IEEE Industrial Electronics Society, pp. 7823-7828,
2013.
[21] W. Hwang, G. Salvendy: Number of People Required for Usability
Evaluation: the 10±2 Rule, Communications of the ACM, Vol. 53,
No. 5, pp. 130-133, May 2010.