Biography
I am a third-year CS PhD candidate at the Georgia Institute of Technology, Atlanta, USA, advised by Prof. Ling Liu. Prior to that, I received my B.E. and master's degrees from South China University of Technology, Guangzhou, China, advised by Prof. Weiwei Lin. My research interests include distributed machine learning, parallel and distributed computing, optimization algorithms, and LLM security/safety alignment.
My main research focus at present is enhancing large language model (LLM) safety, paving the critical path towards artificial general intelligence (AGI).
The research problem I am currently working on is defending LLMs against harmful fine-tuning. We are committed to designing defenses against harmful fine-tuning from different angles. The defenses currently available from our group include:
- Alignment stage defense: Vaccine (NeurIPS2024), Booster
- Fine-tuning stage defense: Lisa (NeurIPS2024)
- Post-fine-tuning stage defense: Antidote
We have released a survey preprint illustrating the attack setting and existing attacks/defenses for harmful fine-tuning. A curated list is maintained in this repo. A slide deck is also available for illustration.
We always welcome different forms of collaboration. If you are interested, please reach out for discussion.
I am seeking research intern opportunities on relevant research topics.
Publications
Conference
- T. Huang, S. Hu, L. Liu, “Vaccine: Perturbation-aware Alignment for Large Language Models against Harmful Fine-tuning,” NeurIPS2024 [arXiv] [homepage] [code]
- T. Huang, S. Hu, F. Ilhan, S. Tekin, L. Liu, “Lazy Safety Alignment for Large Language Models against Harmful Fine-tuning,” NeurIPS2024 [arXiv] [code]
- S. Tekin, F. Ilhan, T. Huang, S. Hu, L. Liu, “LLM-TOPLA: Efficient LLM Ensemble by Maximising Diversity,” EMNLP 2024 (Findings) [Paper] [code]
- K. Chow, S. Hu, T. Huang, L. Liu, “Personalized Privacy Protection Mask Against Unauthorized Facial Recognition”, ECCV2024. [Paper]
- K. Chow, S. Hu, T. Huang, F. Ilhan, W. Wei, L. Liu, “Diversity-driven Privacy Protection Masks Against Unauthorized Face Recognition”, PET2024. [Paper]
- F. Ilhan, G. Su, S. Tekin, T. Huang, S. Hu, L. Liu, “Resource-Efficient Transformer Pruning for Finetuning of Large Models”, CVPR2024. [Paper]
- S. Hu, T. Huang, KH. Chow, W. Wei, Y. Wu, L. Liu, “ZipZap: Efficient Training of Language Models for Ethereum Fraud Detection”, WWW2024. [Paper]
- F. Ilhan, KH. Chow, S. Hu, T. Huang, S. Tekin, W. Wei, Y. Wu, M. Lee, R.Kompella, H. Latapie, G. Liu, L. Liu, “Adaptive Deep Neural Network Inference Optimization with EENet,” WACV2024 [Paper]
- T. Huang, S. Hu, KH. Chow, F. Ilhan, S. Tekin, L. Liu, “Lockdown: Backdoor Defense for Federated Learning with Isolated Subspace Training,” NeurIPS2023. [Paper] [Code]
- S. Hu, T. Huang, F. Ilhan, S. Tekin, L. Liu, “Large Language Model-Powered Smart Contract Vulnerability Detection: New Perspectives”, TPS2023. [Paper]
- F. Ilhan, S. Tekin, S. Hu, T. Huang, KH. Chow, L. Liu, “Hierarchical Deep Neural Network Inference for Device-Edge-Cloud Systems,” WWW2023 (short paper). [Paper]
- Y. Sun, L. Shen, T. Huang, and D. Tao, “FedSpeed: Larger Local Interval, Less Communication Round, and Higher Generalization Accuracy”, ICLR2023. [OpenReview]
Journal
- T. Huang, L. Shen, Y. Sun, W. Lin, and D. Tao, “Fusion of Global and Local Knowledge for Personalized Federated Learning,” 2023, Transactions on Machine Learning Research (TMLR). [OpenReview] [Code]
- T. Huang, W. Lin, L. Shen, K. Li and A. Y. Zomaya, “Stochastic Client Selection for Federated Learning with Volatile Clients,” 2022, IEEE Internet of Things Journal (IoT-J). [arXiv]
- T. Huang, W. Lin, X. Hong, X. Wang, Q. Wu, R. Li, CH. Hsu, AY. Zomaya, “Adaptive Processor Frequency Adjustment for Mobile Edge Computing with Intermittent Energy Supply”, 2021, IEEE Internet of Things Journal (IoT-J). [arXiv] [code]
- T. Huang, W. Lin, W. Wu, L. He, K. Li and AY. Zomaya, “An Efficiency-boosting Client Selection Scheme for Federated Learning with Fairness Guarantee,” 2020, IEEE Transactions on Parallel and Distributed Systems (TPDS) (Special Section on Parallel and Distributed Computing Techniques for AI, ML, and DL). [arXiv]
- T. Huang, W. Lin, C. Xiong, R. Pan and J. Huang, “An Ant Colony Optimization Based Multi-objective Service Replicas Placement Strategy for Fog Computing,” 2020, IEEE Transactions on Cybernetics (TCYB).
Preprint & OpenReview
- T. Huang, S. Liu, L. Shen, F. He, W. Lin, D. Tao, “Achieving Personalized Federated Learning with Sparse Local Models,” preprint [arXiv]
- T. Huang, G. Bhattacharya, P. Joshi, J. Kimball, L. Liu, “Antidote: Post-fine-tuning Safety Alignment for Large Language Models against Harmful Fine-tuning,” preprint [arXiv] [homepage]
- T. Huang, S. Hu, F. Ilhan, S. Tekin, L. Liu, “Booster: Tackling Harmful Fine-tuning for Large Language Models via Attenuating Harmful Perturbation,” preprint [arXiv] [homepage] [code]
- T. Huang, S. Hu, F. Ilhan, S. Tekin, L. Liu, “Harmful Fine-tuning Attacks and Defenses for Large Language Models: A Survey,” preprint [arXiv] [homepage] [slide]
Industrial Experience
Research intern at Dolby Laboratories, Atlanta, USA, with Gautam Bhattacharya, Pratik Joshi, and Josh Kimball. (May 2024 ~ Aug. 2024)
- Post-fine-tuning stage defense against harmful fine-tuning for LLMs.
Research intern at JD Explore Academy, Beijing, China, with Li Shen. (Mar. 2022 ~ Jun. 2022)
- Low-rank+sparse compression for personalized federated learning.
- Application of proximal algorithms.
Research intern at JD Explore Academy, Beijing, China, with Li Shen. (Jun. 2021 ~ Oct. 2021)
- High-efficiency model compression algorithms for distributed ML.
- Optimization for personalized federated learning.
Invited Talk
- “Harmful Fine-tuning Attacks and Defenses for LLMs”, RPI, hosted by Prof. Tianyi Chen (Nov. 2024)
Awards & Honors
- Outstanding reviewer of ICLR'24
- Top reviewer of NeurIPS'23
- Student Travel Grants of IEEE TPS, 2023
- National Scholarship for Graduate, 2021
- National Scholarship for Graduate, 2020
- The First-Class School Scholarship, 2019
Services
- Conference Reviewer: NeurIPS (‘23,‘24), ICLR (‘24,‘25), ICML'24, AAAI'25, AISTATS'25, AAAI'25-AIA
- Journal Reviewer: IEEE TMC, IEEE TCOM, IEEE TP, ACM TOIT, TMLR, IEEE TIFS