
dc.contributor.advisor        Salehi-Abari, Amirali
dc.contributor.author         Asadian, Aryan
dc.date.accessioned           2021-08-31T18:43:38Z
dc.date.accessioned           2022-03-29T17:27:06Z
dc.date.available             2021-08-31T18:43:38Z
dc.date.available             2022-03-29T17:27:06Z
dc.date.issued                2021-08-01
dc.identifier.uri             https://hdl.handle.net/10155/1325
dc.description.abstract       Deep neural models have shown promising results in various areas, e.g., computer vision and natural language processing, at the cost of high computation and storage resource consumption. These characteristics of deep neural networks have acted as a barrier in resource-constrained environments, e.g., smartphones. Among the numerous approaches proposed to mitigate this limitation, knowledge distillation has gained much attention due to its generalizability and simplicity of implementation. This thesis introduces enhanced knowledge distillation (EKD), a simple yet effective approach that outperforms canonical knowledge distillation by using multiple classifier heads at various depths of the teacher. First, multiple classifier heads are attached to the teacher model at different depths. The mounted heads benefit from the fully trained teacher model and converge quickly while the teacher backbone is kept frozen. In the last step, the cohort of all classifiers supervises the student. EKD showed superior performance in comparison with several state-of-the-art distillation frameworks.    en
dc.description.sponsorship    University of Ontario Institute of Technology    en
dc.language.iso               en    en
dc.subject                    Deep learning    en
dc.subject                    Knowledge transfer    en
dc.subject                    Knowledge distillation    en
dc.subject                    Capacity gap    en
dc.subject                    Intermediate representations    en
dc.title                      Enhanced knowledge distillation by auxiliary classifiers    en
dc.type                       Thesis    en
dc.degree.level               Master of Science (MSc)    en
dc.degree.discipline          Computer Science    en
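
As a rough illustration of the procedure the abstract describes, the PyTorch-style sketch below attaches auxiliary classifier heads to a frozen, fully trained teacher, trains only those heads, and then lets the cohort of all classifiers supervise the student. The teacher interface (a return_features=True flag returning intermediate feature maps), the head design, the temperature, and the loss weights are placeholder assumptions for this sketch, not the thesis's actual implementation.

# Illustrative sketch only: architectures, the teacher's feature-returning API,
# and all hyperparameters are assumptions, not the configuration used in the thesis.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AuxHead(nn.Module):
    """Small classifier head mounted on an intermediate teacher feature map."""
    def __init__(self, in_channels, num_classes):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(in_channels, num_classes)

    def forward(self, feat):
        return self.fc(self.pool(feat).flatten(1))

def kd_loss(student_logits, teacher_logits, T=4.0):
    """Canonical soft-label distillation loss with temperature T."""
    return F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)

# Step 1: attach heads at several depths of a frozen, fully trained teacher
# and train only the heads with ordinary cross-entropy.
def train_aux_heads(teacher, heads, loader, epochs=5, lr=1e-3, device="cpu"):
    teacher.eval()
    for p in teacher.parameters():
        p.requires_grad_(False)  # teacher backbone stays frozen
    opt = torch.optim.Adam([p for h in heads for p in h.parameters()], lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            with torch.no_grad():
                feats, _ = teacher(x, return_features=True)  # assumed API
            loss = sum(F.cross_entropy(h(f), y) for h, f in zip(heads, feats))
            opt.zero_grad(); loss.backward(); opt.step()

# Step 2: the cohort of all classifiers (auxiliary heads plus the teacher's own
# output) jointly supervises the student.
def train_student(student, teacher, heads, loader, epochs=100, lr=0.1,
                  alpha=0.5, device="cpu"):
    opt = torch.optim.SGD(student.parameters(), lr=lr, momentum=0.9)
    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            with torch.no_grad():
                feats, t_logits = teacher(x, return_features=True)
                cohort = [h(f) for h, f in zip(heads, feats)] + [t_logits]
            s_logits = student(x)
            soft = sum(kd_loss(s_logits, t) for t in cohort) / len(cohort)
            loss = alpha * F.cross_entropy(s_logits, y) + (1 - alpha) * soft
            opt.zero_grad(); loss.backward(); opt.step()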

