
dc.contributor.advisor: Collins, Christopher
dc.contributor.advisor: Livingstone, Steven
dc.contributor.author: Keith, Adeliz
dc.date.accessioned: 2024-01-23T20:02:47Z
dc.date.available: 2024-01-23T20:02:47Z
dc.date.issued: 2023-12-01
dc.identifier.uri: https://hdl.handle.net/10155/1723
dc.description.abstract: Computer-based methods for displaying and formatting texts for speed reading, including the popular Rapid Serial Visual Presentation method, are increasingly popular in both research and commercial applications. However, these techniques tend to be intrusive and are optimized only for maximally efficient speed reading. Here, we present a technique for multimodal text layouts using typographical cuing, making the text responsive to the user's reading behavior as observed through eye tracking. We present a quantitative and qualitative evaluation of our work, finding high usability ratings and positive qualitative feedback but no significant effects on task performance. Our work replicates and extends prior work on gaze-aware, attentive documents.
dc.description.sponsorship: University of Ontario Institute of Technology
dc.language.iso: en
dc.subject: Reading
dc.subject: Eye tracking
dc.subject: Attentive documents
dc.title: Read, skim, scan: gaze-aware documents as implicit feedback for multimodal typographical cuing
dc.type: Thesis
dc.degree.level: Master of Science (MSc)
dc.degree.discipline: Computer Science
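
The abstract describes a layout technique that adapts typographical cues to the reader's gaze as observed by an eye tracker. As a rough, hypothetical sketch only (not the implementation described in the thesis), the following Python example shows one way gaze-contingent emphasis could be selected: words whose bounding boxes lie near the current gaze point are flagged for a heavier typographic treatment. All names here (WordBox, words_near_gaze) and the pixel radius are illustrative assumptions.

    from dataclasses import dataclass

    @dataclass
    class WordBox:
        """A laid-out word and its bounding box in page coordinates (pixels)."""
        text: str
        x: float       # left edge
        y: float       # baseline
        width: float
        height: float

    def words_near_gaze(words, gaze_x, gaze_y, radius=80.0):
        """Return the words whose box centers lie within `radius` px of the gaze point.

        A renderer could then apply a typographical cue (e.g., a heavier weight)
        to these words and a lighter style to the surrounding text.
        """
        nearby = []
        for w in words:
            cx = w.x + w.width / 2    # horizontal center of the word box
            cy = w.y - w.height / 2   # vertical center, above the baseline
            if (cx - gaze_x) ** 2 + (cy - gaze_y) ** 2 <= radius ** 2:
                nearby.append(w)
        return nearby

    if __name__ == "__main__":
        # One laid-out line of text with made-up coordinates.
        line = [
            WordBox("Gaze-aware", 10, 100, 90, 18),
            WordBox("documents", 110, 100, 85, 18),
            WordBox("respond", 205, 100, 70, 18),
            WordBox("to", 285, 100, 20, 18),
            WordBox("reading", 315, 100, 65, 18),
        ]
        emphasized = words_near_gaze(line, gaze_x=120, gaze_y=92)
        print([w.text for w in emphasized])   # ['Gaze-aware', 'documents']

Using box centers and a fixed radius keeps the sketch simple; a real gaze-aware system would also need to handle eye-tracker noise and fixation detection.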

