Figure 1. The internal buffer structure of a message consumer.

Source publication
Conference Paper
Full-text available
In this paper, we present a novel software tool designed and implemented to simplify the development process of Multimodal Human-Computer Interaction (MHCI) systems. This tool, which is called the HCI^2 Workbench, exploits a Publish / Subscribe (P/S) architecture to facilitate efficient and reliable inter-module data communication and runtime syste...
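
The abstract's central design choice is a Publish/Subscribe architecture, in which modules exchange data through named topics rather than direct point-to-point links. As a rough illustration of the pattern (not the Workbench's actual API; all names below are hypothetical), a minimal in-process hub might look like this:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;

// Minimal in-process Publish/Subscribe hub: modules subscribe to a named
// topic and receive every message later published to it, so publishers and
// subscribers never need to know about each other.
public class PubSubHub {
    public interface Subscriber {
        void onMessage(String topic, byte[] payload);
    }

    private final Map<String, List<Subscriber>> topics = new ConcurrentHashMap<>();

    public void subscribe(String topic, Subscriber s) {
        topics.computeIfAbsent(topic, t -> new CopyOnWriteArrayList<>()).add(s);
    }

    public void publish(String topic, byte[] payload) {
        // Deliver to every current subscriber of the topic, if any.
        for (Subscriber s : topics.getOrDefault(topic, List.of())) {
            s.onMessage(topic, payload);
        }
    }
}
```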

Context in source publication

Context 1
... In such a case, unprocessed messages will be queued in the module's internal buffer and cause it to expand indefinitely. This waste of memory is not only unnecessary but also harmful to the system's stability. To solve this problem, we insert a secondary buffer between each message consumer's local inbox and its output buffer, as illustrated in Fig. 1. The secondary buffer is organized as an automatically growing queue with a fixed maximum capacity. If the module's message retrieval rate cannot keep up with the input rate, the secondary buffer will eventually fill up and consequently block the message consumer's internal message delivery thread. Hence, the message source ...
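
The excerpt above describes a bounded, blocking queue that converts a full buffer into backpressure on the sender. A minimal sketch of that idea, assuming Java's standard bounded queues (the class and method names below are hypothetical illustrations, not the HCI^2 Workbench API):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch of the secondary-buffer idea: a queue that grows on demand up to a
// fixed maximum capacity. When it is full, put() blocks the consumer's
// internal message delivery thread, throttling the message source instead of
// letting unread messages accumulate without bound.
public class SecondaryBuffer {
    private final BlockingQueue<byte[]> queue;

    public SecondaryBuffer(int maxCapacity) {
        // LinkedBlockingQueue allocates nodes lazily but never exceeds maxCapacity.
        this.queue = new LinkedBlockingQueue<>(maxCapacity);
    }

    // Called by the internal message delivery thread for each message taken
    // from the consumer's local inbox; blocks while the buffer is full.
    public void deliver(byte[] message) throws InterruptedException {
        queue.put(message);
    }

    // Called by the module when it is ready to process the next message.
    public byte[] retrieve() throws InterruptedException {
        return queue.take();
    }
}
```

The blocking put() is what propagates backpressure: once the delivery thread stalls, the message source can detect the stalled consumer and react, which matches the behavior the excerpt describes.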

Similar publications

Article
Full-text available
This work presents a multidisciplinary study proposing a tool called DIMZAL, which forecasts fuelbreak safety zone sizes. To evaluate a safety zone and to prevent injury, the Acceptable Safety Distance (ASD) between the fire and the firefighters is required. This distance is usually set using a general rule-of-thumb: it...

Citations

... Different application needs have prompted a different architecture design and focus for each system. From traditional multi-agent distributed architectures [16], [19], recent frameworks have evolved into unified platforms that provide templates for building the modules themselves [20], [21]. ...
... One very well-rounded multimodal framework is HCI2 [19], which is based on a Publisher/Subscriber model on top of a messaging system such as ActiveMQ. Another broadly used framework is the Social Signal Interpretation framework (SSI) [30], which is based on a pipeline architecture. ...
Article
During face-to-face interactions, people naturally integrate nonverbal behaviors such as facial expressions and body postures as part of the conversation to infer the communicative intent or emotional state of their interlocutor. The interpretation of these nonverbal behaviors will often be contextualized by interactional cues such as the previous spoken question, the general discussion topic or the physical environment. A critical step in creating computers able to understand or participate in this type of social face-to-face interactions is to develop a computational platform to synchronously recognize nonverbal behaviors as part of the interactional context. In this platform, information for the acoustic and visual modalities should be carefully synchronized and rapidly processed. At the same time, contextual and interactional cues should be remembered and integrated to better interpret nonverbal (and verbal) behaviors. In this article, we introduce a real-time computational framework, MultiSense, which offers flexible and efficient synchronization approaches for context-based nonverbal behavior analysis. MultiSense is designed to utilize interactional cues from both interlocutors (e.g., from the computer and the human participant) and integrate this contextual information when interpreting nonverbal behaviors. MultiSense can also assimilate behaviors over a full interaction and summarize the observed affective states of the user. We demonstrate the capabilities of the new framework with a concrete use case from the mental health domain where MultiSense is used as part of a decision support tool to assess indicators of psychological distress such as depression and post-traumatic stress disorder (PTSD). In this scenario, MultiSense not only infers psychological distress indicators from nonverbal behaviors but also broadcasts the user state in real-time to a virtual agent (i.e., a digital interviewer) designed to conduct semi-structured interviews with human participants. Our experiments show the added value of our multimodal synchronization approaches and also demonstrate the importance of MultiSense contextual interpretation when inferring distress indicators.
... [31] introduces a novel real-time approach for smile detection. Also, [32] introduces a system that facilitates rapid development of interactive systems, and [33] presents many examples of work on the control of virtual characters in theatrical performances. In recent years, increasing research and development has explored Human-Robot Interaction, especially for the challenge of service robots. ...
Article
It has already been realized by the scientific and technical community that a new form of technology is going to lead future technological developments. This technology will be more human-centric and will be increasingly “hidden” within everyday objects. It will be smarter, personalized, pervasive, and ubiquitous. This technology includes what is called Ambient Intelligence (AmI). In this paper, we identify the main aspects of AmI through a review of the recent developments achieved in these aspects of AmI and Ambient Intelligence Environments (AmIEs), and point out the problems yet to be solved as well as visions of the future.
Article
Trace analysis graphical user environments have to provide different views of trace data in order to be effective in aiding comprehension of the traced application's behavior. In this article we propose an open and modular software architecture, the FrameSoC workbench, which defines clear principles for view engineering and for view consistency management. The FrameSoC workbench has been successfully applied in real trace analysis use cases.
Article
In the present study, we characterized the transcriptional regulatory region (KF038144) controlling the expression of a constitutive FAD2 in Brassica napus. There are multiple FAD2 gene copies in the B. napus genome. The FAD2 gene characterized and analyzed in this study is located on chromosome A5 and was designated BnFAD2A5-1. BnFAD2A5-1 harbors an intron (1192 bp) within its 5'-untranslated region (5'-UTR). This intron demonstrated promoter activity. Deletion analysis of the BnFAD2A5-1 promoter and intron using the β-glucuronidase (GUS) reporter system revealed that -220 to -1 bp is the minimal promoter region, while -220 to -110 bp and +34 to +285 bp are two important regions conferring high levels of transcription. BnFAD2 transcripts were induced by light, low temperature, and abscisic acid (ABA). These observations demonstrated that not only the promoter but also the intron is involved in controlling the expression of the BnFAD2A5-1 gene. Intron-mediated regulation is an essential aspect of this gene's expression regulation.
Conference Paper
Full-text available
In this workshop of the 8th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2008), the emphasis is on research on facial and bodily expressions for the control and adaptation of games. We distinguish between two forms of expressions, depending on whether the user has the initiative and consciously uses his or her movements and expressions to control the interface, or whether the application takes the initiative to adapt itself to the affective state of the user as it can be interpreted from the user's expressive behavior. Hence, we look at:

- Voluntary control: The user consciously produces facial expressions, head movements or body gestures to control a game. This includes commands that allow navigation in the game environment or that allow movements of avatars or changes in their appearances (e.g. showing similar facial expressions on the avatar's face, transforming body gestures to emotion-related or to emotion-guided activities). Since the expressions and movements are made consciously, they do not necessarily reflect the (affective) state of the gamer.

- Involuntary control: The game environment detects, and gives an interpretation to, the gamer's spontaneous facial expression and body pose and uses it to adapt the game to the supposed affective state of the gamer. This adaptation can affect the appearance of the game environment, the interaction modalities, the experience and engagement, the narrative and the strategy that is followed by the game or the game actors.

The workshop shows the broad range of applications that address the topic.