The rapid integration of Artificial Intelligence (AI) systems across diverse domains has led to transformative advancements, accompanied by substantial concerns about their security and trustworthiness. This thesis explores how to make AI systems more secure and trustworthy. It introduces methods to 1) detect backdoor attacks that compromise model integrity, 2) protect the privacy of training data, 3) ensure the accountability of AI code generators, and 4) safeguard personal data from unauthorized use. By addressing these critical issues, the research aims to enhance the security and reliability of AI technologies, ultimately fostering more trustworthy AI applications that benefit society.
Thesis type: Thesis (PhD)
Thesis note: Thesis submitted for the Degree of Doctor of Philosophy, Swinburne University of Technology, 2024.