Design and evaluation of GAN-based models for adversarial training robustness in deep learning
Adversarial attacks expose a generalization weakness of current deep learning models under specific distribution-shifted data. Adversarial samples generated by an attack algorithm can induce malicious behavior in any deep learning system, undermining the consistency of the model's predictions. This thesis presents the design and evaluation of several candidate component architectures for a GAN that can provide a new direction for training a robust convolutional classifier. Each component addresses a different aspect of the GAN that affects the generalization and robustness outcomes. The best formulation achieves around 45% accuracy under an 8/255 L∞ PGD attack and 60% accuracy under a 128/255 L2 PGD attack, outperforming L2 PGD adversarial training. Further contributions include research on gradient masking, robustness transferability across perturbation constraints, and generalization limitations.
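The PGD attack referenced in the abstract iteratively ascends the loss gradient and projects the perturbation back into an ε-ball around the clean input (e.g. ε = 8/255 for L∞). The sketch below is not the thesis's code; it is a minimal NumPy illustration of L∞ PGD on a toy linear "model" with an analytic gradient, where `grad_fn`, `alpha`, and the toy weights `w` are assumptions for illustration only.

```python
import numpy as np

def linf_pgd(x, grad_fn, eps=8/255, alpha=2/255, steps=10):
    """Minimal L-infinity PGD sketch: take a gradient-sign ascent step,
    then project back into the eps-ball around the clean input x.
    grad_fn(x_adv) must return dLoss/dx at the current point."""
    x_adv = x.copy()
    for _ in range(steps):
        g = grad_fn(x_adv)
        x_adv = x_adv + alpha * np.sign(g)        # gradient-sign step
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project into eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)          # keep valid pixel range
    return x_adv

# Toy example: loss = w . x, so dLoss/dx = w everywhere (hypothetical).
w = np.array([1.0, -1.0, 0.5])
x = np.array([0.5, 0.5, 0.5])
x_adv = linf_pgd(x, grad_fn=lambda x_adv: w)
print(np.max(np.abs(x_adv - x)))  # perturbation saturates at eps = 8/255
```

For a real convolutional classifier the gradient would come from backpropagation rather than a closed form, but the step/project loop is the same; the L2 variant replaces the sign step with a gradient normalized to unit L2 norm and the clip with a projection onto the L2 ball.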
Showing items related by title, author, creator and subject.
Chauhan, Ravi (2020-12-01): IDS are essential components in preventing malicious traffic from penetrating networks. IDS have been rapidly enhancing their detection ability using ML algorithms. As a result, attackers look for new methods to evade the ...
Addas, Alaadin (2018-12-01): Fallback authentication (FA) techniques such as security questions, email resets, and SMS resets have significant security flaws that easily undermine the primary method of authentication. Security questions have been shown ...
Lescisin, Michael John (2019-04-01): Security and privacy in computer systems is becoming an increasingly important field of study as the information stored on these systems is of ever increasing value. The state of research on direct security attacks on computer ...