Show simple item record

dc.contributor.advisor  Mahmoud, Qusay
dc.contributor.author  Lescisin, Michael John
dc.date.accessioned  2019-04-08T14:02:35Z
dc.date.accessioned  2022-03-29T16:49:06Z
dc.date.available  2019-04-08T14:02:35Z
dc.date.available  2022-03-29T16:49:06Z
dc.date.issued  2019-04-01
dc.identifier.uri  https://hdl.handle.net/10155/1019
dc.description.abstract  Security and privacy in computer systems are becoming ever more important fields of study as the information stored on these systems grows in value. Research on direct attacks against computer systems, such as exploiting memory safety errors or passing unfiltered input to shells, is at an advanced state, and a rich set of security testing tools is available for testing software against these common attack types. Machine-learning-based intrusion detection systems, which monitor system activity for suspicious patterns, are also available and are commonly deployed in production environments. What is missing, however, is the consideration of implicit information flows, or side-channels. One significant factor that has been holding back work on side-channel detection and mitigation is the very broad scope of the topic: research in this area has revealed side-channels formed by observable signals such as acoustic noise from a CPU, encrypted network traffic patterns, and ambient monitor light. Furthermore, there currently exists no portable method for distributing test cases for side-channels, as there is for other security tests such as network footprinting with recon-ng. This thesis introduces a framework based on interoperable components for modelling an adversary and generating feedback on what that adversary is capable of learning by monitoring a myriad of adversary-observable side-channel information sources. The framework operates by monitoring two data streams: the first is the stream of adversary-observable side-channel cues, and the second is the stream of private system activity. These data streams are used to train and evaluate a selected machine learning classifier, measuring how well it predicts private system activity from the observable cues. A prototype has been built to evaluate the effects of side-channel information leaks on five common computer system use cases.  en
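The two-stream train-and-evaluate loop described in the abstract can be sketched as follows. Everything here is an illustrative assumption rather than the thesis's actual components: the synthetic packet-size traffic model, the three activity labels, and the 1-nearest-neighbour classifier all stand in for whichever side-channel cues, private activities, and machine learning model the framework is configured with.

```python
# Minimal sketch of the framework's two-stream pipeline: pair adversary-
# observable side-channel cues (stream 1) with private system activity labels
# (stream 2), train a classifier, and measure prediction accuracy.
# All values below are synthetic assumptions for illustration only.
import random

random.seed(0)

# Hypothetical (mean, stddev) of observed packet sizes per private activity.
ACTIVITIES = {
    "browsing": (300, 80),
    "streaming": (1200, 100),
    "idle": (60, 20),
}

def sample_trace(activity, n=10):
    """Stream 1: one adversary-observable side-channel trace."""
    mean, dev = ACTIVITIES[activity]
    return [random.gauss(mean, dev) for _ in range(n)]

def make_dataset(per_class=30):
    """Paired streams: (side-channel trace, private activity label)."""
    data = []
    for activity in ACTIVITIES:
        data.extend((sample_trace(activity), activity) for _ in range(per_class))
    random.shuffle(data)
    return data

def predict(train, trace):
    # 1-nearest-neighbour on mean packet size: the simplest possible
    # stand-in for the framework's selected machine learning classifier.
    key = sum(trace) / len(trace)
    return min(train, key=lambda item: abs(sum(item[0]) / len(item[0]) - key))[1]

train, test = make_dataset(30), make_dataset(10)
accuracy = sum(predict(train, t) == label for t, label in test) / len(test)
print(f"private-activity prediction accuracy: {accuracy:.2f}")
```

A high accuracy here would signal that the modelled adversary can infer private activity from the side-channel alone, which is exactly the kind of feedback the framework is designed to report.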
dc.description.sponsorship  University of Ontario Institute of Technology  en
dc.language.iso  en  en
dc.subject  Computer systems  en
dc.subject  Information sources  en
dc.subject  Machine learning classifier  en
dc.subject  Adversary-observable side-channel cues  en
dc.subject  Private system activity  en
dc.title  A monitoring framework for side-channel information leaks  en
dc.type  Thesis  en
dc.degree.level  Master of Applied Science (MASc)  en
dc.degree.discipline  Electrical and Computer Engineering  en

