Seungone Kim

Ph.D. Student at CMU LTI

seungone@cmu.edu

About
Hello! I am a Ph.D. student at the Carnegie Mellon University Language Technologies Institute, co-advised by Graham Neubig and Sean Welleck. I was a research intern at FAIR (Meta) in 2025 and will join again in Summer 2026. I obtained my M.S. in AI at KAIST AI, where I was fortunate to be advised by Minjoon Seo. Prior to that, I was a research intern at NAVER AI Lab and LG AI Research, and I completed my B.S. in CS at Yonsei University.

My research focuses on LLM Evaluation and AI for Science. In particular, I aim to develop better (1) evaluation frameworks and benchmarks that systematically identify weaknesses in LLMs and AI scientist agents, and (2) synthetic data generation, post-training, and inference methods to solve the most challenging problems in science and engineering domains.

I host weekly office hours to discuss research projects or Ph.D. applications. Please sign up at this Calendly Link!

News
January 2026     Our RefineBench, CoT Encyclopedia, OptimalThinkingBench, and VideoJudge papers were accepted at ICLR 2026!
December 2025     I've reached 2,000 citations!
December 2025     Our RefineBench paper was selected for the Best Runner-Up Paper at the Multi-turn Interaction LLM Workshop (@ NeurIPS 2025)!
April 2025     I've reached 1,000 citations!
April 2025     Our BiGGen Bench paper was selected for the Best Paper Award at NAACL 2025!
October 2024     I received the NEC Student Research Fellowship, which will generously support my research on harnessing synthetic data to improve LLMs!
March 2024     I was admitted to the Carnegie Mellon University Language Technologies Institute as a Ph.D. student.
December 2023     Our FLASK paper was selected for the Honorable Mention Award at the Workshop on Instruction Tuning and Instruction Following (@ NeurIPS 2023)!
October 2022     I was admitted to KAIST AI as an M.S. student.

Education

Language Technologies Institute, Carnegie Mellon University     Sep. 2024 - Present

Ph.D. in Computer Science (Advisors: Graham Neubig and Sean Welleck)

KAIST AI     Mar. 2023 - Aug. 2024

M.S. in Artificial Intelligence (Advisor: Minjoon Seo)

Yonsei University     Mar. 2018 - Feb. 2023

B.S. in Computer Science

Publications

Preprints

SPICE: Self-play in corpus environments improves reasoning

Bo Liu, Chuanyang Jin, Seungone Kim, Weizhe Yuan, Wenting Zhao, Ilia Kulikov, Xian Li, Sainbayar Sukhbaatar, Jack Lanchantin, Jason Weston

Preprint Under Review

Does Math Reasoning Improve General LLM Capabilities? Understanding Transferability of LLM Reasoning

Maggie Huan, Yuetai Li, Tuney Zheng, Xiaoyu Xu, Seungone Kim, Minxin Du, Radha Poovendran, Graham Neubig, Xiang Yue

Preprint Under Review

Datasheets Aren't Enough: DataRubrics for Automated Quality Metrics and Accountability

Genta Indra Winata, David Anugraha, Emmy Liu, Alham Fikri Aji, Shou-Yi Hung, Aditya Parashar, Patrick Amadeus Irawan, Ruochen Zhang, Zheng-Xin Yong, Jan Christian Blaise Cruz, Niklas Muennighoff, Seungone Kim, Hanyang Zhao, Sudipta Kar, Kezia Erina Suryoraharjo, M Farid Adilazuarda, En-Shiun Annie Lee, Ayu Purwarianti, Derry Tanti Wijaya, Monojit Choudhury

Preprint Under Review

Scaling Evaluation-time Compute with Reasoning Models as Process Evaluators

Seungone Kim*, Ian Wu*, Jinu Lee*, Xiang Yue, Seongyun Lee, Mingyeong Moon, Kiril Gashteovski, Carolin Lawrence, Julia Hockenmaier, Graham Neubig, Sean Welleck

Preprint Under Review

FREESON: Retriever-Free Retrieval-Augmented Reasoning via Corpus-Traversing MCTS

Chaeeun Kim, Seungone Kim

Preprint Under Review

MM-Eval: A Multilingual Meta-Evaluation Benchmark for LLM-as-a-Judge and Reward Models

Guijin Son, Dongkeun Yoon, Juyoung Suk, Javier Aula-Blasco, Mano Aslan, Vu Trong Kim, Shayekh Bin Islam, Jaume Prats-Cristia, Lucía Tormo-Banuelos, Seungone Kim

Preprint Under Review

2026

RefineBench: Evaluating Refinement Capability of Language Models via Checklists

Young-Jun Lee*, Seungone Kim*, Byung-Kwan Lee, Minkyeong Moon, Yechan Hwang, Jong Myoung Kim, Graham Neubig, Sean Welleck, Ho-Jin Choi

ICLR 2026

The CoT Encyclopedia: Analyzing, Predicting, and Controlling how a Reasoning Model will Think

Seongyun Lee*, Seungone Kim*, Minju Seo, Yongrae Jo, Dongyoung Go, Hyeonbin Hwang, Jinho Park, Xiang Yue, Sean Welleck, Graham Neubig, Moontae Lee, Minjoon Seo

ICLR 2026

OptimalThinkingBench: Evaluating over and underthinking in LLMs

Pranjal Aggarwal, Seungone Kim, Jack Lanchantin, Sean Welleck, Jason Weston, Ilia Kulikov, Swarnadeep Saha

ICLR 2026

VideoJudge: Bootstrapping Enables Scalable Supervision of MLLM-as-a-Judge for Video Understanding

Abdul Waheed, Zhen Wu, Dareen Alharthi, Seungone Kim, Bhiksha Raj

ICLR 2026

2025

Reasoning Models Better Express Their Confidence

Dongkeun Yoon, Seungone Kim, Sohee Yang, Sunkyoung Kim, Soyeon Kim, Yongil Kim, Eunbi Choi, Yireun Kim, Minjoon Seo

NeurIPS 2025

Web-Shepherd: Advancing PRMs for Reinforcing Web Agents

Hyungjoo Chae, Sunghwan Kim, Junhee Cho, Seungone Kim, Seungjun Moon, Gyeom Hwangbo, Dongha Lim, Minjin Kim, Yeonjun Hwang, Minju Gwak, Dongwook Choi, Minseok Kang, Gwanhoon Im, ByeongUng Cho, Hyojun Kim, Jun Hee Han, Taeyoon Kwon, Minju Kim, Beong-woo Kwak, Dongjin Kang, Jinyoung Yeo

NeurIPS 2025 (Spotlight)

Measuring Sycophancy of Language Models in Multi-turn Dialogues

Jiseung Hong, Grace Byun, Seungone Kim, Kai Shu, Jinho D. Choi

EMNLP 2025

M-Prometheus: A Suite of Open Multilingual LLM Judges

Jose Pombal, Dongkeun Yoon, Patrick Fernandes, Ian Wu, Seungone Kim, Ricardo Rei, Graham Neubig, Andre F.T. Martins

COLM 2025

Let's Predict Sentence by Sentence

Hyeonbin Hwang, Byeongguk Jeon, Seungone Kim, Jiyeon Kim, Hoyeon Chang, Sohee Yang, Seungpil Won, Dohaeng Lee, Youbin Ahn, Minjoon Seo

COLM 2025 RAM2 Workshop (Oral)

Evaluating Language Models as Synthetic Data Generators

Seungone Kim, Juyoung Suk, Xiang Yue, Vijay Viswanathan, Seongyun Lee, Yizhong Wang, Kiril Gashteovski, Carolin Lawrence, Sean Welleck, Graham Neubig

ACL 2025

LLM-as-an-Interviewer: Beyond Static Testing Through Dynamic LLM Evaluation

Eunsu Kim, Juyoung Suk, Seungone Kim, Niklas Muennighoff, Dongkwan Kim, Alice Oh

ACL 2025 Findings

The BiGGen Bench: A Principled Benchmark for Fine-grained Evaluation of Language Models with Language Models

Seungone Kim, Juyoung Suk, Ji Yong Cho, Shayne Longpre, Chaeeun Kim, Dongkeun Yoon, Guijin Son, Yejin Cho, Sheikh Shafayat, Jinheon Baek, Sue Hyun Park, Hyeonbin Hwang, Jinkyung Jo, Hyowon Cho, Haebin Shin, Seongyun Lee, Hanseok Oh, Noah Lee, Namgyu Ho, Se June Joo, Miyoung Ko, Yoonjoo Lee, Hyungjoo Chae, Jamin Shin, Joel Jang, Seonghyeon Ye, Bill Yuchen Lin, Sean Welleck, Graham Neubig, Moontae Lee, Kyungjae Lee, Minjoon Seo

NAACL 2025 (Best Paper Award)

KMMLU: Measuring Massive Multitask Language Understanding in Korean

Guijin Son, Hanwool Lee, Sungdong Kim, Seungone Kim, Niklas Muennighoff, Taekyoon Choi, Cheonbok Park, Kang Min Yoo, Stella Biderman

NAACL 2025

Bridging the Data Provenance Gap Across Text, Speech, and Video

Shayne Longpre, Nikhil Singh, Manuel Cherep, Kushagra Tiwary, Joanna Materzynska, William Brannon, Robert Mahari, Manan Dey, Mohammed Hamdy, Nayan Saxena, Ahmad Mustafa Anis, Emad A. Alghamdi, Vu Minh Chien, Naana Obeng-Marnu, Da Yin, Kun Qian, Yizhi LI, Minnie Liang, An Dinh, Shrestha Mohanty, Deividas Mataciunas, Tobin South, Jianguo Zhang, Ariel N. Lee, Campbell S. Lund, Christopher Klamm, Damien Sileo, Diganta Misra, Enrico Shippole, Kevin Klyman, Lester James Validad Miranda, Niklas Muennighoff, Seonghyeon Ye, Seungone Kim, Vipul Gupta, Vivek Sharma, Xuhui Zhou, Caiming Xiong, Luis Villa, Stella Biderman, Alex Pentland, Sara Hooker, Jad Kabbara

ICLR 2025

Pangea: A Fully Open Multilingual Multimodal LLM for 39 Languages

Xiang Yue, Yueqi Song, Akari Asai, Seungone Kim, Jean de Dieu Nyandwi, Simran Khanuja, Anjali Kantharuban, Lintang Sutawika, Sathyanarayanan Ramamoorthy, Graham Neubig

ICLR 2025

Better Instruction-Following Through Minimum Bayes Risk

Ian Wu, Patrick Fernandes, Amanda Bertsch, Seungone Kim, Sina Pakazad, Graham Neubig

ICLR 2025 (Spotlight)

2024

Prometheus 2: An Open Source Language Model Specialized in Evaluating Other Language Models

Seungone Kim*, Juyoung Suk*, Shayne Longpre, Bill Yuchen Lin, Jamin Shin, Sean Welleck, Graham Neubig, Moontae Lee, Kyungjae Lee, Minjoon Seo

EMNLP 2024

Self-Explore to Avoid the Pit: Improving the Reasoning Capabilities of Language Models with Fine-grained Rewards

Hyeonbin Hwang, Doyoung Kim, Seungone Kim, Seonghyeon Ye, Minjoon Seo

EMNLP 2024 Findings

Language Models as Compilers: Simulating Pseudocode Execution Improves Algorithmic Reasoning in Language Models

Hyungjoo Chae, Yeonghyeon Kim, Seungone Kim, Kai Tzu-iunn Ong, Beong-woo Kwak, Moohyeon Kim, Seonghwan Kim, Taeyoon Kwon, Jiwan Chung, Youngjae Yu, Jinyoung Yeo

EMNLP 2024

Consent in Crisis: The Rapid Decline of the AI Data Commons

Shayne Longpre, Robert Mahari, Ariel Lee, Campbell Lund, Hamidah Oderinwale, William Brannon, Nayan Saxena, Naana Obeng-Marnu, Tobin South, Cole Hunter, Kevin Klyman, Christopher Klamm, Hailey Schoelkopf, Nikhil Singh, Manuel Cherep, Ahmad Anis, An Dinh, Caroline Chitongo, Da Yin, Damien Sileo, Deividas Mataciunas, Diganta Misra, Emad Alghamdi, Enrico Shippole, Jianguo Zhang, Joanna Materzynska, Kun Qian, Kush Tiwary, Lester James Validad Miranda, Manan Dey, Minnie Liang, Mohammed Hamdy, Niklas Muennighoff, Seonghyeon Ye, Seungone Kim, Shrestha Mohanty, Vipul Gupta, Vivek Sharma, Vu Minh Chien, Xuhui Zhou, Yizhi Li, Caiming Xiong, Luis Villa, Stella Biderman, Hanlin Li, Daphne Ippolito, Sara Hooker, Jad Kabbara, Sandy Pentland

NeurIPS 2024

Aligning to Thousands of Preferences via System Prompt Generalization

Seongyun Lee*, Sue Hyun Park*, Seungone Kim, Minjoon Seo

NeurIPS 2024

Personalized Soups: Personalized Large Language Model Alignment via Post-hoc Parameter Merging

Joel Jang, Seungone Kim, Bill Yuchen Lin, Yizhong Wang, Jack Hessel, Luke Zettlemoyer, Hannaneh Hajishirzi, Yejin Choi, Prithviraj Ammanabrolu

NeurIPS 2024 AFM Workshop (Oral)

Prometheus-Vision: Vision-Language Model as a Judge for Fine-grained Evaluation

Seongyun Lee*, Seungone Kim*, Sue Hyun Park, Geewook Kim, Minjoon Seo

ACL 2024 Findings

LangBridge: Multilingual Reasoning without Multilingual Supervision

Dongkeun Yoon, Joel Jang, Sungdong Kim, Seungone Kim, Sheikh Shafayat, Minjoon Seo

ACL 2024

Multi-Task Inference: Can Large Language Models Follow Multiple Instructions at Once?

Guijin Son*, Sangwon Baek, Sangdae Nam, Ilgyun Jeong, Seungone Kim*

ACL 2024

Prometheus: Inducing Fine-grained Evaluation Capability in Language Models

Seungone Kim*, Jamin Shin*, Yejin Cho*, Joel Jang, Shayne Longpre, Hwaran Lee, Sangdoo Yun, Seongjin Shin, Sungdong Kim, James Thorne, Minjoon Seo

ICLR 2024

FLASK: Fine-grained Language Model Evaluation based on Alignment Skill Sets

Seonghyeon Ye, Doyoung Kim, Sungdong Kim, Hyeonbin Hwang, Seungone Kim, Yongrae Jo, James Thorne, Juho Kim, Minjoon Seo

ICLR 2024 (Spotlight)

2023

The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-tuning

Seungone Kim*, Se June Joo*, Doyoung Kim, Joel Jang, Seonghyeon Ye, Jamin Shin, Minjoon Seo

EMNLP 2023

Exploring the Benefits of Training Expert Language Models over Instruction Tuning

Joel Jang, Seungone Kim, Seonghyeon Ye, Doyoung Kim, Lajanugen Logeswaran, Moontae Lee, Kyungjae Lee, Minjoon Seo

ICML 2023

CoTEVer: Chain of Thought Prompting Annotation Toolkit for Explanation Verification

Seungone Kim, Se June Joo, Yul Jang, Hyungjoo Chae, Jinyoung Yeo

EACL 2023 Demo Track

2022

Mind the Gap! Injecting Commonsense Knowledge for Abstractive Dialogue Summarization

Seungone Kim*, Se June Joo*, Hyungjoo Chae*, Chaehyeong Kim, Seung-won Hwang, Jinyoung Yeo

COLING 2022

( * indicates equal contribution )

Vitæ

Full CV in PDF.

  • FAIR @ Meta May. 2026 - Dec. 2026
    Research Intern (Mentor: Jason Weston)
    TBD
  • FAIR @ Meta May. 2025 - Dec. 2025
    Research Intern (Mentors: Ilia Kulikov, Jason Weston)
    Worked on developing a synthetic dataset that improves the reasoning capabilities of LMs.
  • CMU LTI Aug. 2024 - Present
    Ph.D. in Computer Science (Advisors: Graham Neubig and Sean Welleck)
    Working on LLM Evaluation and AI for Science.
  • AML Lab @ LG AI Research Jan. 2024 - Jun. 2024
    Research Intern (Mentor: Kyungjae Lee)
    Worked on building a comprehensive NLG benchmark that mimics the fine-grained nature of human evaluation.
  • Language Lab @ Naver AI Lab Mar. 2023 - Dec. 2023
    Research Intern (Mentor: Jamin Shin)
    Worked on building open-source evaluator LMs & VLMs that could potentially replace GPT-4 and GPT-4V evaluation.
  • KAIST AI Mar. 2023 - Jul. 2024
    M.S. in Artificial Intelligence (Advisor: Minjoon Seo)
    Worked on developing evaluator LMs & VLMs and on Chain-of-Thought fine-tuning. Graduated early (3 semesters).
  • LK Lab @ KAIST AI Jul. 2022 - Feb. 2023
    Research Intern (Mentor: Joel Jang)
    Worked on developing expert LMs that can generalize to novel tasks.
  • Yonsei University Mar. 2018 - Feb. 2023
    B.S. in Computer Science
    Graduated early (7 semesters).