Figure 1 - uploaded by Jean Vanderdonckt
Design parameters for graphical user interfaces. 


Source publication
Chapter
Full-text available
The capabilities of multimodal applications running on the web are well delineated, since they are mainly constrained by what their underlying standard markup language offers, as opposed to hand-made multimodal applications. As the experience in developing such multimodal web applications is growing, the need arises to identify and define major de...

Contexts in source publication

Context 1
... options for the graphical user interface are described according to the five parameters specified in Fig. 1. The sub-task presentation parameter specifies the appearance of each sub-task in the final user interface. The possible values are illustrated in Fig. 2. The presentation of each sub-task can be either separated or combined. Separated presentation identifies the situation where each sub-task is represented in a different container (e.g., ...
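The separated/combined choice described above can be pictured as a small enumeration. This is an illustrative sketch only: the class and function names below are hypothetical, and the excerpt names just this one parameter of the five.

```python
from enum import Enum

class SubTaskPresentation(Enum):
    """Hypothetical encoding of the sub-task presentation design parameter."""
    SEPARATED = "separated"   # each sub-task rendered in its own container
    COMBINED = "combined"     # all sub-tasks share a single container

def describe(presentation: SubTaskPresentation) -> str:
    """One-line summary of what the chosen value implies for the final UI."""
    if presentation is SubTaskPresentation.SEPARATED:
        return "one container (e.g., window) per sub-task"
    return "all sub-tasks grouped in one container"
```

A generator could branch on such a value when deciding how many top-level containers to emit.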
Context 2
... By using MultiXML, we want to address a reduced set of concerns by limiting the number of design options, thus making the design space more manageable or tractable [7]. Our support involves a transformational approach detailed in [15]. The method consists of a forward engineering process composed of four transformational steps illustrated in Fig. 10. To ensure these steps, transformations are encoded as graph transformations performed on UsiXML models expressed in their graph equivalent. All design options correspond to a class in the UsiXML meta-model (e.g., the tabbed dialog box value corresponds to the tabbedDialogBox class, the feedback parameter corresponds to the vocalFeedback ...
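The idea of rewriting a model-as-graph so that an abstract element is replaced by the UsiXML class chosen as a design option can be sketched minimally as follows. The node/edge layout and the `apply_rule` API are assumptions for illustration; actual UsiXML graph transformations are richer than a type substitution.

```python
# A minimal sketch of one graph-rewriting step: models are graphs (nodes
# with a type, edges between ids), and a "rule" replaces a matched node
# type with the UsiXML meta-model class selected as a design option.

def apply_rule(graph, match_type, replacement_type):
    """Return a new graph where every node of `match_type` is retyped."""
    return {
        "nodes": [
            {**n, "type": replacement_type} if n["type"] == match_type else n
            for n in graph["nodes"]
        ],
        "edges": graph["edges"],
    }

model = {
    "nodes": [{"id": "c1", "type": "abstractContainer"}],
    "edges": [],
}
# Concretize the 'tabbed dialog box' design option for abstract containers.
concrete = apply_rule(model, "abstractContainer", "tabbedDialogBox")
```

Because the rule returns a fresh graph, the source model stays intact, which mirrors a forward-engineering step producing a new model level.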
Context 3
... five software modules of the MultiXML tool are (Fig. 10): the IdealXML tool, the TransformiXML tool, the GrafiXML tool (automatically generates graphical UIs (XHTML) from the UsiXML Concrete UI Model), and the CFB (Communication Flow Builder) Generator tool (generates XML code corresponding to the Communication Flow Builder file format by applying XSL Transformations over the Concrete Vocal UI specification ...
Context 4
... a set of personal information, such as name and card details; then the system checks the validity of the card and finally, the user confirms the payment). Based on the design options detailed in Section 3.1, we will illustrate, using different graph transformation rules, their applicability for the car rental system. For the final graphical UI (Fig. 14) we consider the parameters sub-task presentation and sub-task navigation. Fig. 11 illustrates the transformation rule applied in order to generate a UI where the presentation of the sub-tasks is separated into three windows. For each top-level abstract container, a graphical container of type window is created. The navigation between the ...
Context 5
... checks the validity of the card and finally, the user confirms the payment). Based on the design options detailed in Section 3.1, we will illustrate, using different graph transformation rules, their applicability for the car rental system. For the final graphical UI (Fig. 14) we consider the parameters sub-task presentation and sub-task navigation. Fig. 11 illustrates the transformation rule applied in order to generate a UI where the presentation of the sub-tasks is separated into three windows. For each top-level abstract container, a graphical container of type window is created. The navigation between the windows is of type sequential and is concretized in a global placement of the ...
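The "one window per top-level abstract container, sequential navigation via NEXT/PREV buttons" transformation described in these excerpts can be sketched as below. Function and field names are hypothetical; the three container titles come from the car rental scenario in the text.

```python
# Sketch: separated sub-task presentation with sequential navigation.
# Each top-level abstract container becomes a window; NEXT/PREV buttons
# are placed on each window, omitted at the ends of the sequence.

def separate_into_windows(abstract_containers):
    windows = []
    for i, title in enumerate(abstract_containers):
        windows.append({
            "type": "window",
            "title": title,
            # Sequential navigation concretized as NEXT/PREV buttons.
            "buttons": (["PREV"] if i > 0 else []) +
                       (["NEXT"] if i < len(abstract_containers) - 1 else []),
        })
    return windows

ui = separate_into_windows(["Choose car", "Pay reservation", "Print receipt"])
```

Since navigation is handled only by this one logically grouped pair of buttons per window, the cardinality parameter mentioned in the excerpt would take its simple value here.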
Context 6
... concretized in a global placement of the (NEXT, PREV) buttons identified on each window. The navigation is ensured only by these two logically grouped objects, so the value of the cardinality parameter is simple. The transformation rule that endows the NEXT button (similar for the PREV button) with activate and deactivate features is presented in Fig. 12. For a multimodal UI (Fig. 15), a transformation rule generates a multimodal text input that accepts the name of the credit card holder (Fig. 13). We consider the following design option values: prompt (graphical and vocal), input (graphical or vocal), feedback (graphical and vocal), guidance for input (iconic with microphone and key- ...
Context 7
... of the (NEXT, PREV) buttons identified on each window. The navigation is ensured only by these two logically grouped objects, so the value of the cardinality parameter is simple. The transformation rule that endows the NEXT button (similar for the PREV button) with activate and deactivate features is presented in Fig. 12. For a multimodal UI (Fig. 15), a transformation rule generates a multimodal text input that accepts the name of the credit card holder (Fig. 13). We consider the following design option values: prompt (graphical and vocal), input (graphical or vocal), feedback (graphical and vocal), guidance for input (iconic with microphone and keyboard icons), or guidance for ...
Context 8
... grouped objects, so the value of the cardinality parameter is simple. The transformation rule that endows the NEXT button (similar for the PREV button) with activate and deactivate features is presented in Fig. 12. For a multimodal UI (Fig. 15), a transformation rule generates a multimodal text input that accepts the name of the credit card holder (Fig. 13). We consider the following design option values: prompt (graphical and vocal), input (graphical or vocal), feedback (graphical and vocal), guidance for input (iconic with microphone and keyboard icons), or guidance for feedback (iconic with speaker icon). Figure 13. Generation of multimodal text input. ...
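The generated multimodal text input can be pictured as a structured description carrying exactly the design option values listed in the excerpt. The dictionary layout and function name are assumptions; UsiXML encodes these as model elements, not Python dicts.

```python
# Sketch of the multimodal text input generated for the card holder's name,
# annotated with the design option values quoted from the excerpt.

def multimodal_text_input(label):
    return {
        "widget": "textInput",
        "label": label,
        "prompt": ["graphical", "vocal"],     # prompt: graphical and vocal
        "input": ["graphical", "vocal"],      # input: graphical or vocal (either)
        "feedback": ["graphical", "vocal"],   # feedback: graphical and vocal
        "guidance_for_input": ["microphone icon", "keyboard icon"],
        "guidance_for_feedback": ["speaker icon"],
    }

field = multimodal_text_input("Credit card holder name")
```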

Similar publications

Conference Paper
Full-text available
Graph representation of graphical documents often suffers from noise such as spurious nodes and edges, and their discontinuity. In general, these errors occur during low-level image processing, viz. binarization, skeletonization, vectorization, etc. Hierarchical graph representation is an elegant and efficient way to solve this kind of problem by hier...
Article
Full-text available
The modelling and monitoring of Hybrid Dynamic Systems (HDS) require simulation tools supporting the representation of continuous subsystems as well as discrete-event ones. Graphical modelling approaches are particularly valuable for models that will be used for diagnosis and supervision. Indeed, tools such as the Petri Net...
Conference Paper
Full-text available
We introduce Single-Pushout Rewriting for arbitrary partial algebras. Thus, we give up the usual restriction to graph structures, which are algebraic categories with unary operators only. By this generalisation, we obtain an integrated and straightforward treatment of graphical structures (objects) and attributes (data). We lose co-completeness of...
Article
Full-text available
Although the idea of using a modulated laser signal to measure the speed of light is not new, most methods found in international literature are still expensive, as a result of either the instruments or the circuits used. In the present approach, we provide an alternative that requires equipment that most universities own for their undergraduate pr...
Conference Paper
Full-text available
This article is devoted to the systemic modelling of a wheel-motor propulsion system for electric vehicles using the graphical tool known as the Bond Graph. The wheel actuator is of the permanent-magnet synchronous type, controlled by the sine-triangle Pulse Width Modulation (PWM) technique. The model of the...

Citations

... Besides, it is possible to combine modalities in several ways, and at different levels of granularity. Some other XML-based projects have been proposed in this area, as in [12,15,16]. ...
Conference Paper
In this paper we describe an approach to facilitate the design of Web multimodal interfaces, aiming at improving the user experience and user interface usability by using speech recognition together with the usual graphical user interfaces. We present a proposal for usability evaluation based on heuristic evaluation, which considers the multimodal principles identified during a case study. As a result of using the proposed approach in the case study and from a literature review, we report our considerations for the design, development and improvement of Web multimodal interfaces.
... Since speech is the most basic and efficient way of communication, multimodal interfaces can further expand the convenience and accessibility of services as well as lower the complexity of traditional unimodal GUI based interfaces [1][2]. The disadvantage of using multimodality within web applications lies in the fact that the capabilities of multimodal web interfaces are limited to the capabilities of their underlying standard markup language [3]. Most advances in multimodal interfaces for web based services are nowadays driven by ubiquitous computing environments (pervasive computing) [4][5]. ...
Article
Full-text available
Most users, in either desktop or ubiquitous environments, access Web applications from Web browser interfaces. The majority of standard Web applications are still based on GUIs and usually support user-machine interaction using traditional human-machine interfaces (e.g. mouse, keyboard). In order to make access to Web content more natural and to improve user experience, advanced user interfaces, enabling additional modalities in human-machine interaction, must be developed and provided. This paper presents a concept and proposes a multimodal web platform (MWP) used for flexible integration of advanced multimodal technologies into web applications. The multimodal web platform suggests a process of integrating web applications into a multimodal framework, instead of the traditional integration of multimodal interfaces into web applications, and the formation of a platform-independent multimodal interface. The proposed MWP platform was developed in order to provide a multimodal interface for an interactive kiosk setup and a multimodal e-commerce application. The MWP platform is based on the Apache Tomcat web server, Java Web technology as the middleware environment, and a complex distributed infrastructure that takes care of providing multimodal services.
... The case study concerns a simple car rental system allowing users to choose a car, book and pay a reservation and print a receipt. The detailed case study can be found in [16] (pp. 140-164). ...
Conference Paper
Full-text available
This paper discusses multi-level dialog specifications for user interfaces of multi-target interactive systems and proposes a step-wise method that combines a transformational approach for model-to-model derivation with interactive editing of dialog models for tailoring the derived models. This method provides a synthesis of existing solutions for dialog modeling using an XML-based User Interface Description Language, UsiXML, along with the StateWebCharts notation for expressing the dialog at a high level of abstraction. Our aim is to push forward the design and reuse of dialog specifications throughout several levels of abstraction, ranging from task and domain models to the final user interface, thanks to a mechanism based on cascading style sheets. In this way, it is expected that the dialog properties are not only inherited from one level to another but also made much more reusable than in the past.
... XISL [37]), and explicit design options for multimodal dialog (e.g., CARE properties [29], task-based design of multimodal applications [19]). MultimodaliXML design options for MWUIs [38] are structured according to three types of pure modalities (graphical, vocal, tactile) and the combination of them. These design options provide designers with explicit guidance for their future UI, and allow them to explore design alternatives. ...
Article
Full-text available
Designing complex interactive systems requires the collaboration of actors with very different backgrounds. As a result, several languages and tools are used in a single project with no hope for interoperability. In this article, we examine whether a universal language is a realistic approach to UI specification by looking for answers in the domain of Linguistics while finding analogies in software engineering. Then, we explore one particular avenue from mainstream software engineering: that of Model-Driven Engineering, where the notion of transformation is key to the definition of bridges between languages and tools. Building upon these two analyses, we then show how model-driven engineering can be successfully exploited in the development and execution of plastic multimodal UIs, illustrated with a variety of complementary tools.
... The question of the level of fidelity arises here in different terms and has not yet been addressed. Likewise, MultiXML [29] makes it possible to prototype a multimodal interface (graphical, vocal, tactile, or a combination of these) for the web relatively quickly, but the question of the level of fidelity has not been resolved there either. Finally, Topiary [18] leverages the experience gained in developing context-aware applications to propose representations of different kinds that are useful for prototyping such applications. ...
Article
Full-text available
ABSTRACT: The requirements for prototyping graphical user interfaces vary depending on the moment they are considered during the development life cycle of the interactive application. For this purpose, the notion of 'level of fidelity' is introduced, defined, and illustrated so as to be supported by appropriate tools. For each level of fidelity, the strengths and the weaknesses of these supporting tools are discussed in order to identify which one is the most appropriate for each step of the development life cycle. The levels of fidelity, respectively low, moderate, and high, are supported by individual tools developed for this purpose: respectively, a sketching tool named SketchiXML, a vectorial drawing tool named VisiXML, and an advanced interface editor named GrafiXML. These tools are interoperable by exchanging the specifications of a user interface written in the UsiXML language (USer Interface eXtensible Markup Language), an XML-compliant user interface description language. KEYWORDS: model-driven approach, computer-aided design, user interface description language, level of fidelity, sketching tool, prototyping, USer Interface eXtensible Markup Language.
Conference Paper
Automation in the course of user-interface (UI) development has the potential to save resources and time. For graphical user interfaces, considerable research has been performed on their automated generation. While the results are still not in widespread use, the problems are at least well understood by now. In contrast, automated generation of multimodal UIs is still in its infancy. We address this problem by proposing a tool-supported process for generating multimodal UIs for dialogue-based interactive systems. For its concrete enactment, we provide tool support for generating a runtime configuration and glue code, respectively. In a nutshell, our approach generates multimodal dialogue-based UIs semi-automatically.
Article
In ubiquitous computing, the context of use (user, platform, environment) may be multiple, dynamic and unpredictable. As a result, it is necessary to make it possible for the system to reason about its own design at run-time. My hypothesis is that transferring human know-how in HCI into the system may solve the problem. As models have long been recognized as powerful for reasoning in HCI, I promote a Model-Driven Engineering approach. A User Interface (UI) is a graph of models alive at runtime. Models are compliant with explicit metamodels. Adapting the UI means transforming the graph. When properties are preserved, the UI is said to be plastic.
Book
Full-text available
This volume brings together the advanced research results obtained by the European COST Action 2102 "Cross Modal Analysis of Verbal and Nonverbal Communication", primarily discussed at the PINK SSPnet-COST2102 International Conference on Analysis of Verbal and Nonverbal Communication and Enactment: The Processing Issues, held in Budapest, Hungary, in September 2010. The 40 papers presented were carefully reviewed and selected for inclusion in the book. The volume is arranged into two scientific sections. The first section, Multimodal Signals: Analysis, Processing and Computational Issues, deals with conjectural and processing issues of defining models, algorithms, and heuristic strategies for data analysis, coordination of the data flow and optimal encoding of multi-channel verbal and nonverbal features. The second section, Verbal and Nonverbal Social Signals, presents original studies that provide theoretical and practical solutions to the modelling of timing synchronization between linguistic and paralinguistic expressions, actions, body movements, activities in human interaction and on their assistance for an effective human-machine interactions.
Conference Paper
Full-text available
Web applications are a widespread and widely-used concept for presenting information. Their underlying architecture and standards, in many cases, limit their presentation/control capabilities to showing pre-recorded audio/video sequences. Highly-dynamic text content, for instance, can only be displayed in its native form (as part of HTML content). This paper provides concepts and answers that enable the transformation of dynamic web-based content into multimodal sequences generated by different multimodal services. Based on the encapsulation of the content into a multimodal shell, any text-based data can dynamically, and at interactive speeds, be transformed into multimodal visually-synthesized speech. Techniques for the integration of multimodal input (e.g. vision and speech recognition) are also included. The concept of multimodality relies on mashup approaches rather than traditional integration. It can, therefore, extend any type of web-based solution transparently with no major changes to either the multimodal services or the enhanced web application.