About me
Research Goal: I study robustness and generalization in deep learning, combining theoretical and empirical approaches to help build more reliable AI systems. My current focus is LLM safety (jailbreaking, confidence calibration, uncertainty estimation, etc.), with the aim of advancing research toward practical solutions for responsible AI deployment.
I recently completed my PhD in Computer Science at the University of Toronto, advised by Amir-massoud Farahmand and Rich Zemel.
See the Research and Publications and CV pages for more information about my academic background.
Recent News
Sep 2025: Joined RBC Borealis as a visiting researcher
May 2025: One paper on LLM jailbreaking accepted at ICML 2025 (spotlight)
Jul 2024: One paper on improving adversarial transferability accepted at ECCV 2024
Apr 2024: One survey paper on adversarial transferability accepted at TMLR
Apr 2024: Selected as a DAAD AInet fellow for the Postdoc-NeT-AI program on Safety and Security in AI
Nov 2023: One journal paper on understanding model robustness accepted at TMLR with Featured Certification (ICLR 2024 journal-to-conference track)
Sep 2022: One paper on data augmentation accepted at BMVC 2022
