Patterns of multimodal input usage in non-visual information navigation
Document Type
Conference Proceeding
Publication Date
10-17-2006
Abstract
Multimodal input is known to be advantageous for graphical user interfaces, but its benefits for non-visual interaction are unknown. To explore this issue, an exploratory study was conducted with fourteen sighted subjects using a system that supports both speech input and hand input on a touchpad. Findings include: (1) Users chose between the two input modalities based on the type of operation being performed. Navigation operations were carried out primarily with touchpad input, while non-navigation instructions were issued primarily through speech input. (2) Multimodal error correction was not prevalent. Repeating a failed operation until it succeeded and trying other methods within the same input modality were the dominant error-correction strategies. (3) The modality learned first was not necessarily the primary modality used later, but a training-order effect existed. These empirical results provide guidelines for designing non-visual multimodal input and establish a comparison baseline for a subsequent study with blind users. © 2006 IEEE.
Identifier
33749593069 (Scopus)
ISBN
0-7695-2507-5, 978-0-7695-2507-5
Publication Title
Proceedings of the Annual Hawaii International Conference on System Sciences
External Full Text Location
https://doi.org/10.1109/HICSS.2006.377
ISSN
1530-1605
First Page
123
Volume
6
Recommended Citation
Chen, Xiaoyu and Tremaine, Marilyn, "Patterns of multimodal input usage in non-visual information navigation" (2006). Faculty Publications. 18766.
https://digitalcommons.njit.edu/fac_pubs/18766
