Adversarial Attacks Against DNNs Towards Real-World Threat

Thesis posted on 2024-07-12, 20:50, authored by Ranjie Duan
Deep neural networks (DNNs) have reshaped computer vision research. However, studies have shown that by maliciously crafting human-imperceptible adversarial perturbations on normal samples, adversaries can cause DNNs to misclassify the resulting adversarial examples with high confidence. These failures raise security and safety concerns about the applicability of DNNs in the real world. This thesis investigates DNNs’ behaviour through adversarial examples, with the aim of achieving more secure and robust deep learning in the real world. Specifically, we study the real-world threat posed by adversarial examples to computer vision applications.
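For readers unfamiliar with the mechanism the abstract describes, the sketch below illustrates one classic way such imperceptible perturbations are crafted: the Fast Gradient Sign Method (FGSM) of Goodfellow et al. (2015), which perturbs an input in the direction that increases the classifier's loss. This is a generic illustration only, not the attack developed in this thesis; the PyTorch framing and the perturbation budget of 8/255 are assumptions made for the example.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """Craft adversarial examples with the Fast Gradient Sign Method.

    `x` is a batch of images with pixel values in [0, 1], `y` the true
    labels; `epsilon` bounds the L-infinity size of the perturbation
    (8/255 is a common, assumed budget). `model` should be in eval mode.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that increases the loss, then clip back to
    # the valid pixel range so the result is still a legal image.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

Even at such a small budget, a single gradient-sign step is often enough to flip the prediction of an undefended classifier, which is the failure mode the abstract refers to.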

History

Thesis type

  • Thesis (PhD)

Thesis note

A thesis submitted in fulfillment of the requirements for the degree of Doctor of Philosophy, Swinburne University of Technology, May 16, 2022

Copyright statement

Copyright © 2022 Ranjie Duan.

Supervisors

Yun Yang

Language

English
