About me!
My research focuses on the factuality and interpretability of Large Language Models.
I am currently visiting Prof. Junxian He's group at HKUST. You're welcome to join me for board games at CityU or HKUST!
Some of my past research works are:
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation
  Shiqi Chen*, Miao Xiong*, Junteng Liu, Zhengxuan Wu, Teng Xiao, Siyang Gao, Junxian He
  ICML 2024

- FELM: Benchmarking Factuality Evaluation of Large Language Models
  Shiqi Chen, Yiran Zhao, Jinghan Zhang, I-Chun Chern, Siyang Gao, Pengfei Liu, Junxian He
  NeurIPS 2023 (Datasets and Benchmarks track)

- Evaluating Factual Consistency of Summaries with Large Language Models
  Shiqi Chen, Siyang Gao, Junxian He
  arXiv

- On the Universal Truthfulness Hyperplane Inside LLMs
  Junteng Liu, Shiqi Chen, Yu Cheng, Junxian He
  EMNLP 2024

- Composing Parameter-Efficient Modules with Arithmetic Operations
  Jinghan Zhang, Shiqi Chen, Junteng Liu, Junxian He
  NeurIPS 2023

- FacTool: Factuality Detection in Generative AI–A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios
  I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu
  arXiv
Check out my Google Scholar page for more recent publications.