DOI: 10.1145/958432.958486

Algorithms for controlling cooperation between output modalities in 2D embodied conversational agents

Published: 05 November 2003

Abstract

Recent advances in the specification of the multimodal behavior of Embodied Conversational Agents (ECA) have proposed a direct and deterministic one-step mapping from high-level specifications of dialog state or agent emotion onto low-level specifications of the multimodal behavior to be displayed by the agent (e.g. facial expression, gestures, vocal utterance). The gap in abstraction between these two levels of specification makes such a complex mapping difficult to define. In this paper we propose an intermediate level of specification based on combinations between modalities (e.g. redundancy, complementarity). We explain how such intermediate-level specifications can be described using XML in the case of deictic expressions. We define algorithms for parsing such descriptions and generating the corresponding multimodal behavior of 2D cartoon-like conversational agents. Some random selection has been introduced in these algorithms in order to induce "natural variations" in the agent's behavior. We conclude with a discussion of the usefulness of this approach for the design of ECA.
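The paper's actual XML schema is not reproduced in this abstract, so the following is only a hypothetical sketch of the idea: an intermediate-level description of a deictic expression names the cooperation type between modalities, and a generation step parses it and randomly varies which modalities realize the referent. All element and attribute names (`deictic`, `cooperation`, `speech`, `gesture`, `gaze`) are illustrative stand-ins, not the authors' markup.

```python
import random
import xml.etree.ElementTree as ET

# Hypothetical intermediate-level specification of a deictic expression.
# "redundancy" means several modalities convey the same referent.
SPEC = """
<deictic object="window" cooperation="redundancy">
  <speech template="Look at the {object}."/>
  <gesture type="point"/>
  <gaze target="{object}"/>
</deictic>
"""

def generate_behavior(spec_xml, seed=None):
    """Parse an intermediate-level deictic specification and choose a
    concrete set of modalities to realize it, with random variation."""
    rng = random.Random(seed)
    root = ET.fromstring(spec_xml)
    obj = root.get("object")
    cooperation = root.get("cooperation")
    modalities = list(root)

    if cooperation == "redundancy":
        # Redundancy: the referent is conveyed by several modalities;
        # randomly vary how many are used, to induce "natural variations".
        k = rng.randint(2, len(modalities))
        chosen = rng.sample(modalities, k)
    else:
        # Complementarity: each modality carries a distinct part of the
        # message, so all of them are kept.
        chosen = modalities

    behavior = []
    for m in chosen:
        # Instantiate the referent in each modality's attributes.
        attrs = {k_: v.replace("{object}", obj) for k_, v in m.attrib.items()}
        behavior.append((m.tag, attrs))
    return behavior
```

Running `generate_behavior(SPEC)` repeatedly yields different subsets of the three modalities, which is the kind of controlled non-determinism the abstract describes.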

References

[1] Arafa, Y., Kamyab, K., Mamdani, E., Kshirsagar, S., Magnenat-Thalmann, N., Guye-Vuillème, A., and Thalmann, D. Two approaches to scripting character animation. In [8].
[2] Buisine, S., Abrilian, S., and Martin, J.-C. Evaluation of individual multimodal behavior of 2D embodied agents in presentation tasks. In Proc. of the Workshop "Embodied Conversational Characters as Individuals", Marriot, A., Pelachaud, C., and Ruttkay, Z. (Eds.), 2nd Int. Joint Conf. on Autonomous Agents & Multiagent Systems (AAMAS'03), Melbourne, Australia, 2003.
[3] De Carolis, B., Carofiglio, V., Bilvi, M., and Pelachaud, C. APML, a markup language for believable behavior generation. In [8].
[4] Marriot, A. and Stallo, J. VHML: uncertainties and problems. A discussion. In [8].
[5] Pelachaud, C. and Poggi, I. Subtleties of facial expressions in embodied agents. The Journal of Visualization and Computer Animation, Special Issue: Graphical Autonomous Virtual Humans, Ballin, D., Rickel, J., and Thalmann, D. (Eds.), vol. 13 (5), pp. 301-312, 2002.
[6] Piwek, P., Krenn, B., Schröder, M., Grice, M., Baumann, S., and Pirker, H. RRL: a rich representation language for the description of agent behaviour in NECA. In [8].
[7] Prendinger, H., Descamps, S., and Ishizuka, M. Scripting affective communication with life-like characters in web-based interaction systems. Applied Artificial Intelligence, vol. 16, pp. 519-553, 2002.
[8] Proc. of the Workshop "Embodied Conversational Agents - Let's Specify and Evaluate Them!", 1st Int. Joint Conf. on Autonomous Agents & Multi-Agent Systems (AAMAS'02), Bologna, Italy, 2002.

Cited By

  • (2006) A framework for the intelligent multimodal presentation of information. Signal Processing 86(12), pp. 3696-3713. DOI: 10.1016/j.sigpro.2006.02.041. Online publication date: 1-Dec-2006.
  • (2006) Architecture of a framework for generic assisting conversational agents. Proceedings of the 6th International Conference on Intelligent Virtual Agents, pp. 145-156. DOI: 10.1007/11821830_12. Online publication date: 21-Aug-2006.


Published In

ICMI '03: Proceedings of the 5th international conference on Multimodal interfaces
November 2003
318 pages
ISBN:1581136218
DOI:10.1145/958432
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. embodied conversational agent
  2. multimodal output
  3. redundancy
  4. specification


Conference

ICMI-PUI03: International Conference on Multimodal User Interfaces
November 5-7, 2003
Vancouver, British Columbia, Canada

Acceptance Rates

ICMI '03 Paper Acceptance Rate: 45 of 130 submissions, 35%
Overall Acceptance Rate: 453 of 1,080 submissions, 42%
