About me!

My research primarily centers on the factuality and interpretability of Large Language Models and Vision-Language Models.
I am currently visiting Prof. Manling Li's group at Northwestern University and Prof. Junxian He's group at HKUST!
Some of my past research includes:
- Bring Reason to Vision: Understanding Perception and Reasoning through Model Merging
  Shiqi Chen*, Jinghan Zhang*, Tongyao Zhu, Wei Liu, Siyang Gao, Miao Xiong, Manling Li, Junxian He
  ICML 2025
- Why Is Spatial Reasoning Hard for VLMs? An Attention Mechanism Perspective on Focus Areas
  Shiqi Chen, Tongyao Zhu, Ruochen Zhou, Jinghan Zhang, Siyang Gao, Juan Carlos Niebles, Mor Geva, Junxian He, Jiajun Wu, Manling Li
  ICML 2025
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation
  Shiqi Chen*, Miao Xiong*, Junteng Liu, Zhengxuan Wu, Teng Xiao, Siyang Gao, Junxian He
  ICML 2024
- FELM: Benchmarking Factuality Evaluation of Large Language Models
  Shiqi Chen, Yiran Zhao, Jinghan Zhang, I-Chun Chern, Siyang Gao, Pengfei Liu, Junxian He
  NeurIPS 2023 (Datasets and Benchmarks track)
- Evaluating Factual Consistency of Summaries with Large Language Models
  Shiqi Chen, Siyang Gao, Junxian He
  arXiv
- On the Universal Truthfulness Hyperplane Inside LLMs
  Junteng Liu, Shiqi Chen, Yu Cheng, Junxian He
  EMNLP 2024
- Composing Parameter-Efficient Modules with Arithmetic Operations
  Jinghan Zhang, Shiqi Chen, Junteng Liu, Junxian He
  NeurIPS 2023
- FacTool: Factuality Detection in Generative AI – A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios
  I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu
  arXiv
Check out my Google Scholar page for more recent publications.