Jiaheng Wei is a highly motivated and accomplished Information Technology professional with a strong background in software engineering and machine learning. He is currently pursuing a PhD in the AISSC School at RMIT, focusing on Trustworthy Distributed Machine Learning Systems. He has demonstrated success in both academic research and real-world application development, and is equipped with a comprehensive skill set spanning multiple programming languages and mathematical concepts.
Ph.D. Research Project
Building trustworthy and resilient distributed learning systems by identifying and resolving threats
The rapid advancement of distributed learning techniques is significantly impacting sectors such as medical diagnosis and user recommendation. A key advantage of these techniques is that they enable collaborative model training without sharing raw data, thereby preserving privacy. However, privacy regulations, underlying threats, and public concerns still pose significant challenges. Specifically, these systems are vulnerable to privacy threats (inversion and inference attacks) and malicious attacks (poisoning and backdoor attacks), while open issues around unlearning, fairness, and interpretability remain. This research addresses these challenges by leveraging the memorization phenomenon in neural networks. By understanding memorization, we aim to uncover undisclosed risks, improve defence strategies, and tackle challenges related to unlearning, fairness, and interpretability, ultimately building trustworthy and resilient distributed learning systems.
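To make the gradient inversion threat mentioned above concrete, the sketch below shows a generic "deep leakage from gradients"-style reconstruction, where an attacker optimises dummy data so that its gradients match a gradient update observed from a client. The toy model, data shapes, and optimisation settings are illustrative assumptions only; this is not the method from the publications listed below.

```python
# Minimal sketch of a gradient inversion attack on a shared gradient update.
# All model sizes, data, and hyperparameters are assumptions for illustration.
import torch
import torch.nn as nn

torch.manual_seed(0)

# A tiny classifier standing in for a client model in federated learning.
model = nn.Sequential(nn.Flatten(), nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 4))
criterion = nn.CrossEntropyLoss()

# The victim client's private sample and label (unknown to the attacker).
x_true = torch.randn(1, 1, 4, 4)
y_true = torch.tensor([2])

# The attacker observes only the gradient the client would share.
loss = criterion(model(x_true), y_true)
true_grads = [g.detach() for g in torch.autograd.grad(loss, model.parameters())]

# The attacker initialises dummy data and a soft dummy label, then optimises
# them so their gradients match the observed ones.
x_dummy = torch.randn_like(x_true, requires_grad=True)
y_dummy = torch.randn(1, 4, requires_grad=True)
optimizer = torch.optim.LBFGS([x_dummy, y_dummy])

def closure():
    optimizer.zero_grad()
    pred = model(x_dummy)
    dummy_loss = torch.sum(
        torch.softmax(y_dummy, dim=-1) * -torch.log_softmax(pred, dim=-1)
    )
    dummy_grads = torch.autograd.grad(dummy_loss, model.parameters(), create_graph=True)
    grad_diff = sum(((dg - tg) ** 2).sum() for dg, tg in zip(dummy_grads, true_grads))
    grad_diff.backward()
    return grad_diff

for _ in range(50):
    diff = optimizer.step(closure)

print("final gradient-matching loss:", float(diff))
print("reconstruction error vs. private input:", float(((x_dummy - x_true) ** 2).mean()))
```

The closer the dummy gradients get to the observed ones, the more private information the attacker recovers, which is why understanding when and what models memorize is central to designing defences against this class of attack.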
Supervisors
- Dr. Chao Chen
- Prof. Kok-Leong Ong
Selected Publications
- Wei, J., Zhang, Y., Zhang, L. Y., Chen, C., Pan, S., Ong, K. L., … & Xiang, Y. (2023). Client-side gradient inversion against federated learning from poisoning. arXiv preprint arXiv:2309.07415.
- Wei, J., Zhang, Y., Zhang, L. Y., Ding, M., Chen, C., Ong, K. L., … & Xiang, Y. (2024). Memorization in deep learning: A survey. arXiv preprint arXiv:2406.03880.