Show simple item record

dc.contributor.advisor: Mirza-Babaei, Pejman
dc.contributor.author: Nova, Atiya Nowshin
dc.date.accessioned: 2022-09-06T18:29:19Z
dc.date.available: 2022-09-06T18:29:19Z
dc.date.issued: 2022-07-01
dc.identifier.uri: https://hdl.handle.net/10155/1516
dc.description.abstract: Within games user research (GUR), predictive methods like expert evaluation are good for easily getting insights into a game in development, but they may not accurately reflect the player experience. On the other hand, experimental methods like playtesting can accurately capture the player experience but are time-consuming and resource-intensive. AI agents have been able to mitigate the issues of playtesting, and the data generated from these agents can supplement expert evaluation. To that end, we introduce PathOS+. This tool allows the simulation of agents and has features that allow users to conduct their evaluations in the same environment as the game and then export their findings. We ran a study to evaluate how PathOS+ fares as an expert evaluation tool with participants of varying levels of UR experience. The results show that it is viable to use AI to identify design problems and lend more validity to expert evaluation. [en]
dc.description.sponsorship: University of Ontario Institute of Technology [en]
dc.language.iso: en [en]
dc.subject: Expert evaluation [en]
dc.subject: Playtesting [en]
dc.subject: Artificial Intelligence [en]
dc.subject: Games user research [en]
dc.subject: Video games [en]
dc.title: Aiding the experts: how artificial intelligence can augment expert evaluation with PathOS+ [en]
dc.type: Thesis [en]
dc.degree.level: Master of Science (MSc) [en]
dc.degree.discipline: Computer Science [en]

