Carnegie Mellon University, Interactive Systems Laboratory
Universität Karlsruhe, Fachbereich Informatik
To achieve this goal, machines need interfaces that are perceptually aware and able to interpret the intent behind a multiplicity of communicative signals, depending on the task, the situation, the devices used, and the individual preferences of the human users. At our lab, we have developed interfaces that flexibly combine alternative modalities (speech, gesture, handwriting, etc.) and simplify human-machine interaction. Different forms of deployment had to be considered, including interactive stationary environments as well as wearable systems for the mobile user.
Our exploration of effective human-machine interaction has shown, however, that opportunities for better machine interfaces extend far beyond simple human-computer queries. Flexibly multimodal and minimally intrusive use of technology must include and combine human-machine interaction, human-machine-human interaction, and computer-enhanced human-human interaction. In my talk, I will present examples of each of these technologies: human-machine systems in the form of multimodal user interaction by voice, gesture, and handwriting; and human-machine-human interaction in the form of speech-to-speech and sign-language translation systems. I will conclude by showing what might happen when human-machine and human-human interaction and collaboration intermingle in context-aware meeting rooms.
Created by: Anke Weinberger (2001-06-26). Maintained by: Anke Weinberger (2001-06-26).