Document Type
Dissertation
Date of Award
Summer 8-31-2007
Degree Name
Doctor of Philosophy in Information Systems - (Ph.D.)
Department
Information Systems
First Advisor
Marilyn M. Tremaine
Second Advisor
Murray Turoff
Third Advisor
Quentin Jones
Fourth Advisor
Brian Whitworth
Fifth Advisor
Ephraim P. Glinert
Abstract
Although multimodal computer input is believed to have advantages over unimodal input, little research has addressed how to design a multimodal input mechanism that facilitates visually impaired users' information access.
This research investigates sighted and visually impaired users' multimodal interaction choices when given an interaction grammar that supports speech and touch input modalities. It investigates whether task type, working memory load, or the prevalence of errors in a given modality impacts a user's choice. Theories of human memory and attention are used to explain the users' speech and touch input coordination.
Among the many findings of this research, the following are the most important in guiding system design: (1) Multimodal input is likely to be used when it is available. (2) Users select input modalities based on the type of task undertaken. Users prefer touch input for navigation operations, but speech input for non-navigation operations. (3) When errors occur, users prefer to stay in the failing modality, instead of switching to another modality for error correction. (4) Despite these common multimodal usage patterns, there is still considerable individual variation in modality choice.
Additional findings include: (1) Modality switching becomes more prevalent when lower working memory and attentional resources are required for the performance of other concurrent tasks. (2) Higher error rates increase modality switching, but only under duress. (3) Training order affects modality usage: teaching a modality first, rather than second, increases its use in users' subsequent task performance.
In addition to discovering the multimodal interaction patterns above, this research contributes to the field of human-computer interaction design by: (1) presenting the design of an eyes-free multimodal information browser, and (2) presenting a Wizard of Oz method for working with visually impaired users in order to observe their multimodal interaction.
Overall, this work is one of the early investigations into how speech and touch might be combined into a non-visual multimodal system that can be used effectively for eyes-free tasks.
Recommended Citation
Chen, Xiaoyu, "Designing multimodal interaction for the visually impaired" (2007). Dissertations. 827.
https://digitalcommons.njit.edu/dissertations/827