The future Max

Ipke Wachsmuth

Technische Fakultät, AG Wissensbasierte Systeme (KI)
Universität Bielefeld

Will it one day be possible to have machines that live up to human communication abilities, in that they understand what we want them to do and can take on the role of social partners? A technical aim of research in artificial intelligence is to advance human-machine interaction by way of systems that use multiple modalities to make communication with the human more intuitive. With the virtual humanoid Max, under development in the Bielefeld AI lab, we explore to what extent embodied communication can be realized by an artificial agent situated in virtual reality. Clearly such an agent does not have a body in the physical sense, but it can be equipped with verbal conversational abilities and employ its virtual body to exhibit non-verbal behaviors. Equipped with a modulated synthetic voice and an articulated body and face, Max is able to speak and gesture, and to mimic emotions. By means of microphones and tracker systems, Max can also "hear" and "see", and is able to process spoken instructions and gestures.

Beyond technical achievement, our research is guided by the expectation that building and testing an artificial communicator will help us reach a more profound understanding of human communication. Many questions about "the future Max" pose current research challenges for our team. For instance, can Max learn to imitate iconic gestures demonstrated by a human partner? Iconic gestural movements are assumed to derive from imagistic representations in working memory, which are transformed into patterns of control signals executed by motor systems. Could an artificial agent construct a "mental image" of shape from an observed iconic gesture and reenact -- or re-express -- it by way of iconic gestures? Another research challenge is emotion. Could an artificial agent express emotions related to internal parameters that are themselves influenced by external and internal events? Will the future Max be able to coordinate actions with a partner, have some kind of physical and mental self-awareness, and possess autobiographic memory, empathy, and a fuller Theory of Mind?
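The idea of emotions driven by internal parameters that external and internal events influence can be illustrated by a minimal sketch, loosely in the spirit of the emotion-dynamics work cited below. All class names, parameters, and thresholds here are illustrative assumptions, not the actual Max implementation:

```python
class EmotionDynamics:
    """Hypothetical sketch: a single valence parameter that decays toward
    a neutral baseline and is perturbed by valenced events."""

    def __init__(self, decay=0.9):
        self.decay = decay      # fraction of valence retained per time step (assumed)
        self.valence = 0.0      # current emotional valence in [-1, 1]

    def perceive(self, impulse):
        """An event (e.g. winning a round in a card game) adds a positive
        or negative impulse; valence is clipped to [-1, 1]."""
        self.valence = max(-1.0, min(1.0, self.valence + impulse))

    def step(self):
        """Advance one time step: with no new events, the emotional state
        relaxes back toward neutral."""
        self.valence *= self.decay

    def category(self):
        """Map the continuous valence onto a coarse expression label
        (thresholds are arbitrary for illustration)."""
        if self.valence > 0.3:
            return "happy"
        if self.valence < -0.3:
            return "annoyed"
        return "neutral"
```

In such a design, the agent's displayed expression is not set directly by events; events only push an internal state that evolves over time, so reactions build up and fade gradually rather than switching on and off.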

Sample publications

Becker, C., Kopp, S., & Wachsmuth, I. (2004). Simulating the emotion dynamics of a multimodal conversational agent. In E. André et al. (Eds.), Affective Dialogue Systems (pp. 154-165). Berlin: Springer (LNAI 3068).

Kopp, S., & Wachsmuth, I. (2004). Synthesizing multimodal utterances for conversational agents. Journal of Computer Animation and Virtual Worlds, 15, 39-52.

Kopp, S., Sowa, T., & Wachsmuth, I. (2004). Imitation games with an artificial agent: from mimicking to understanding shape-related iconic gestures. In A. Camurri & G. Volpe (Eds.), Gesture-based Communication in Human-Computer Interaction (pp. 436-447). Berlin: Springer (LNAI 2915).

Becker, C., Prendinger, H., Ishizuka, M., & Wachsmuth, I. (2005). Evaluating affective feedback of the 3D agent Max in a competitive cards game. Accepted as full paper for ACII 2005, First International Conference on Affective Computing & Intelligent Interaction (Beijing, Oct. 22-24, 2005).

Created by: Anke Weinberger (2005-09-22).
Maintained by: Anke Weinberger (2005-10-19).