Read, skim, scan: gaze-aware documents as implicit feedback for multimodal typographical cuing
Abstract
Computer-based methods for displaying and formatting text for speed reading, such as the widely used Rapid Serial Visual Presentation (RSVP) method, are increasingly common in both research and commercial applications. However, these techniques tend to be intrusive and optimized solely for maximally efficient speed reading. Here, we present a technique for multimodal text layouts that uses typographical cuing to make the text responsive to the user’s reading behavior as observed through eye tracking. We report a quantitative and qualitative evaluation of our work, finding high usability scores and positive qualitative feedback, but no significant effects on task performance. Our work replicates and extends prior work on gaze-aware, attentive documents.