Wenhan Chang

PhD Researcher in AI Safety & Privacy

Zhongnan University of Economics and Law | Wuhan, China

Email: changwh530@gmail.com

Recent News
  • 2026.04.14 My paper "Zero-shot Class Unlearning via Layer-wise Relevance Analysis and Neuronal Path Perturbation" has been accepted by IEEE TDSC! 🎉
  • 2026.04.13 My paper "Unreal Thinking: Chain-of-Thought Hijacking via Two-stage Backdoor" is now available on arXiv 🌸
  • 2026.04.13 My personal homepage is officially updated and online! 🎉
  • 2025.05.31 My paper (corresponding author) "From Thinking to Output: Chain-of-Thought and Text Generation Characteristics in Reasoning Language Models" has been accepted by KSEM 2025! 🎉
  • 2025.05.23 My paper "Chain-of-Lure: A Universal Jailbreak Attack Framework using Unconstrained Synthetic Narratives" is now available on arXiv 🌸
  • 2024.04.15 My paper "Class Machine Unlearning for Complex Data via Concepts Inference and Data Poisoning" is now available on arXiv 🌸
  • 2024.04.15 My paper "Gradient-based Defense Methods for Data Leakage in Vertical Federated Learning" has been accepted by Computers & Security! 🎉

About Me

I received my B.Eng. degree in 2022 and my M.Eng. degree in 2025 from China University of Geosciences, Wuhan. I am currently pursuing a Ph.D. degree at the School of Information Engineering, Zhongnan University of Economics and Law.

My research interests include security and privacy preservation in deep learning, with a focus on LLM safety alignment, poisoning attacks and defenses, and machine unlearning.

2025 - 2029 (Expected)

Ph.D. in Modern Technology Management

Zhongnan University of Economics and Law

Microeconomics, Econometrics, Business Intelligence

2022 - 2025

M.S. in Electronic Information

China University of Geosciences

Machine Learning, Information Security, Matrix Analysis

2018 - 2022

B.E. in Computer Science and Technology

China University of Geosciences

C++ Programming, Embedded Systems, Computer Network

Publications

As First/Corresponding Author

2026

Zero-shot Class Unlearning via Layer-wise Relevance Analysis and Neuronal Path Perturbation

W Chang, T Zhu, P Xiong*, Y Wu, F Guan, W Zhou

IEEE Transactions on Dependable and Secure Computing (CCF-A)

2026

Unreal Thinking: Chain-of-Thought Hijacking via Two-stage Backdoor

W Chang, T Zhu, P Xiong, F Guan, W Zhou

arXiv preprint arXiv:2604.09235

2025

Chain-of-Lure: A Synthetic Narrative-Driven Approach to Compromise Large Language Models

W Chang, T Zhu*, Y Zhao, S Song, P Xiong, W Zhou, Y Li

arXiv preprint arXiv:2505.17519

2025

From Thinking to Output: Chain-of-Thought and Text Generation Characteristics in Reasoning Language Models

J Liu, Z Xu, Y Fang, Y Chen, Z Ying, W Chang*

KSEM 2025 (CCF-C)

2024

Class Machine Unlearning for Complex Data via Concepts Inference and Data Poisoning

W Chang, T Zhu*, H Xu, W Liu, W Zhou

arXiv preprint arXiv:2405.15662

2024

Gradient-based Defense Methods for Data Leakage in Vertical Federated Learning

W Chang, T Zhu*

Computers & Security, 139, 103744 (CCF-B)

As Co-author

2025

Large Language Models Merging for Enhancing the Link Stealing Attack on Graph Neural Networks

F Guan, T Zhu*, W Chang, W Ren, W Zhou

IEEE Transactions on Dependable and Secure Computing (CCF-A)

2025

TeleAI at SemEval-2025 Task 8: Advancing Table Reasoning Framework with Large Language Models

S Xiong, M Li, D Wang, Y Zhao, J Zhang, C Pan, H He, X Li, W Chang, et al.

SemEval 2025 (ACL Workshop)

2025

T2R-BENCH: A Benchmark for Real World Table-to-Report Task

J Zhang, C Pan, S Xiong, K Wei, Y Zhao, X Li, J Peng, X Gu, J Yang, W Chang, et al.

EMNLP 2025 (CCF-B)

2025

TableReasoner: Advancing Table Reasoning Framework with Large Language Models

S Xiong, D Wang, Y Zhao, J Zhang, C Pan, H He, X Li, W Chang, Z He, et al.

arXiv preprint arXiv:2507.08046

2024

Generative Adversarial Networks Unlearning

H Sun, T Zhu*, W Chang, W Zhou

IEEE Transactions on Dependable and Secure Computing (CCF-A)

2024

A Two-stage Model Extraction Attack on GANs with a Small Collected Dataset

H Sun, T Zhu*, W Chang, W Zhou

Computers & Security, 137, 103634 (CCF-B)

2020

Model Poisoning Defense on Federated Learning: A Validation Based Approach

Y Wang, T Zhu*, W Chang, S Shen, W Ren

NSS 2020 (EI)

* indicates corresponding author. A full list is available on Google Scholar.

Research Interests

LLM Safety Alignment

Ensuring large language models behave safely and align with human values.

Privacy Preservation

Protecting sensitive information in machine learning systems.

Poisoning Attack & Defense

Studying adversarial attacks against ML systems and developing defenses.

Machine Unlearning

Removing specific data influence from trained models.