Deep neural networks (DNNs) have reshaped computer vision research. However, studies have shown that by maliciously crafting human-imperceptible adversarial perturbations on normal samples, adversarial examples can fool DNNs with a high level of confidence. These failures raise security and safety concerns about the applicability of DNNs in the real world. This thesis investigates DNNs' behavior through adversarial examples, with the aim of achieving more secure and robust deep learning in the real world. Specifically, we study the real-world threat that adversarial examples pose to computer vision applications.
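The abstract does not name a particular attack method; purely as an illustration of how an imperceptible perturbation can flip a classifier's prediction, the sketch below uses the Fast Gradient Sign Method (FGSM), a standard one-step attack. The PyTorch model, the placeholder input, and the epsilon budget are assumptions for demonstration, not details taken from the thesis.

```python
import torch
import torch.nn as nn
import torchvision.models as models

# Any pretrained image classifier serves for this illustration.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

def fgsm_attack(model, image, label, epsilon=8 / 255):
    """One-step FGSM: perturb the input in the gradient-sign direction,
    bounded by epsilon so the change stays visually imperceptible."""
    image = image.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(image), label)
    loss.backward()
    adv = image + epsilon * image.grad.sign()
    return adv.clamp(0, 1).detach()  # keep pixels in the valid [0, 1] range

# Placeholder input and label for illustration only.
x = torch.rand(1, 3, 224, 224)
y = torch.tensor([207])
x_adv = fgsm_attack(model, x, y)
print(model(x).argmax(1), model(x_adv).argmax(1))  # predictions often differ
```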
Thesis type
Thesis (PhD)
Thesis note
A thesis submitted in fulfillment of the requirements for the degree of Doctor of Philosophy, Swinburne University of Technology, May 16, 2022