Multimodal user input patterns in a non-visual context

Document Type

Conference Proceeding

Publication Date

2005

Abstract

How will users choose between speech and hand input to perform tasks when given equivalent choices between the two modalities in a non-visual interface? This exploratory study investigates that question using AudioBrowser, a non-visual information access system for visually impaired users. Findings include: (1) Users chose between input modalities based on the type of operation undertaken: navigation operations were performed primarily with hand input on the touchpad, while non-navigation instructions were issued primarily through speech input. (2) Surprisingly, multimodal error correction was not prevalent; repeating a failed operation until it succeeded and trying alternative methods within the same input modality were the dominant error-correction strategies. (3) The modality learned first was not necessarily the primary modality used later, but a training-order effect existed. These empirical results carry implications for the design of non-visual multimodal input dialogues.

Identifier

32344445893 (Scopus)

ISBN

1-59593-159-7 (ISBN-10); 978-1-59593-159-7 (ISBN-13)

Publication Title

ASSETS 2005: The Seventh International ACM SIGACCESS Conference on Computers and Accessibility

External Full Text Location

https://doi.org/10.1145/1090785.1090832

First Page

206

Last Page

207
