This study addresses the need for transparency in machine learning models used in cybersecurity, emphasizing that comprehensible explanations are essential for trust and informed decision-making. It proposes a framework to improve explainability in two critical areas: identifying suspicious cryptocurrency transactions to strengthen information-level security, and examining Android applications for malicious behaviour to strengthen system-level security. By clarifying how these models reach their decisions, the research aims to mitigate the risks associated with opaque algorithms, fostering greater security, user trust, and reliability in digital threat detection and prevention.
Thesis type
Thesis (Masters by Research)
Thesis note
Thesis submitted for the Degree of Masters by Research, Swinburne University of Technology, 2024.