Seungone Kim

Ph.D. Student at CMU LTI

seungone@cmu.edu

About
Hello! I am a Ph.D. student at the Carnegie Mellon University Language Technologies Institute, co-advised by Graham Neubig and Sean Welleck. I obtained my M.S. degree in Artificial Intelligence at KAIST AI, where I was fortunate to be advised by Minjoon Seo. During that time, I was also a research intern at NAVER AI Lab and LG AI Research, and before that I completed my B.S. in Computer Science at Yonsei University.

My primary research focus is to establish a science of language model behaviors. Concretely, my research interests include (i) developing LLM evaluation frameworks (e.g., LLM-as-a-Judge, meta-evaluation) that systematically identify which specific capabilities language models lack, and (ii) feeding the insights acquired from evaluation back into training stronger LMs for weak-to-strong generalization (e.g., reward overoptimization, training with verbal feedback).

I am hosting weekly office hours to discuss research projects or Ph.D. applications. Please sign up at this Calendly Link! If you are seeking mentorship, please send an email that briefly mentions what you have done and what you'd like to work on with me :)
News
Sep 2024     Our System Message Generalization and Consent in Crisis papers were accepted to NeurIPS 2024!
Sep 2024     Our Prometheus 2 and Think-and-Execute papers were accepted to EMNLP 2024, and Self-Explore to EMNLP 2024 Findings!
May 2024     Our LangBridge and Multi-Task Inference papers were accepted to ACL 2024, and Prometheus-Vision to ACL 2024 Findings!
Mar 2024     I was admitted to the Carnegie Mellon University Language Technologies Institute as a Ph.D. student.
Jan 2024     Our Prometheus and FLASK papers were accepted to ICLR 2024!
Oct 2023     Our CoT Collection paper was accepted to EMNLP 2023!
Apr 2023     Our ExpertLM paper was accepted to ICML 2023!
Feb 2023     Our CoTEVer paper was accepted to EACL 2023 (Demo Track)!
Oct 2022     I was admitted to KAIST AI as an M.S. student. I will continue doing research at the LK Lab.
Oct 2022     Our SICK paper was accepted to COLING 2022!

Education

Language Technologies Institute, Carnegie Mellon University Sep. 2024 - Present

Ph.D. in Computer Science (Advisors: Graham Neubig and Sean Welleck)

KAIST AI Mar. 2023 - Aug. 2024

M.S. in Artificial Intelligence (Advisor: Minjoon Seo)

Yonsei University Mar. 2018 - Feb. 2023

B.S. in Computer Science

Publications

2025

The BiGGen Bench: A Principled Benchmark for Fine-grained Evaluation of Language Models with Language Models

Seungone Kim, Juyoung Suk, Ji Yong Cho, Shayne Longpre, Chaeeun Kim, Dongkeun Yoon, Guijin Son, Yejin Cho, Sheikh Shafayat, Jinheon Baek, Sue Hyun Park, Hyeonbin Hwang, Jinkyung Jo, Hyowon Cho, Haebin Shin, Seongyun Lee, Hanseok Oh, Noah Lee, Namgyu Ho, Se June Joo, Miyoung Ko, Yoonjoo Lee, Hyungjoo Chae, Jamin Shin, Joel Jang, Seonghyeon Ye, Bill Yuchen Lin, Sean Welleck, Graham Neubig, Moontae Lee, Kyungjae Lee, Minjoon Seo

Preprint Under Review

2024

Prometheus 2: An Open Source Language Model Specialized in Evaluating Other Language Models

Seungone Kim*, Juyoung Suk*, Shayne Longpre, Bill Yuchen Lin, Jamin Shin, Sean Welleck, Graham Neubig, Moontae Lee, Kyungjae Lee, Minjoon Seo

EMNLP 2024

Self-Explore to Avoid the Pit: Improving the Reasoning Capabilities of Language Models with Fine-grained Rewards

Hyeonbin Hwang, Doyoung Kim, Seungone Kim, Seonghyeon Ye, Minjoon Seo

EMNLP 2024 Findings

Language Models as Compilers: Simulating Pseudocode Execution Improves Algorithmic Reasoning in Language Models

Hyungjoo Chae, Yeonghyeon Kim, Seungone Kim, Kai Tzu-iunn Ong, Beong-woo Kwak, Moohyeon Kim, Seonghwan Kim, Taeyoon Kwon, Jiwan Chung, Youngjae Yu, Jinyoung Yeo

EMNLP 2024

Consent in Crisis: The Rapid Decline of the AI Data Commons

Shayne Longpre, Robert Mahari, Ariel Lee, Campbell Lund, Hamidah Oderinwale, William Brannon, Nayan Saxena, Naana Obeng-Marnu, Tobin South, Cole Hunter, Kevin Klyman, Christopher Klamm, Hailey Schoelkopf, Nikhil Singh, Manuel Cherep, Ahmad Anis, An Dinh, Caroline Chitongo, Da Yin, Damien Sileo, Deividas Mataciunas, Diganta Misra, Emad Alghamdi, Enrico Shippole, Jianguo Zhang, Joanna Materzynska, Kun Qian, Kush Tiwary, Lester James Validad Miranda, Manan Dey, Minnie Liang, Mohammed Hamdy, Niklas Muennighoff, Seonghyeon Ye, Seungone Kim, Shrestha Mohanty, Vipul Gupta, Vivek Sharma, Vu Minh Chien, Xuhui Zhou, Yizhi Li, Caiming Xiong, Luis Villa, Stella Biderman, Hanlin Li, Daphne Ippolito, Sara Hooker, Jad Kabbara, Sandy Pentland

NeurIPS 2024

Aligning to Thousands of Preferences via System Message Generalization

Seongyun Lee*, Sue Hyun Park*, Seungone Kim, Minjoon Seo

NeurIPS 2024

Personalized Soups: Personalized Large Language Model Alignment via Post-hoc Parameter Merging

Joel Jang, Seungone Kim, Bill Yuchen Lin, Yizhong Wang, Jack Hessel, Luke Zettlemoyer, Hannaneh Hajishirzi, Yejin Choi, Prithviraj Ammanabrolu

arXiv Preprint

KMMLU: Measuring Massive Multitask Language Understanding in Korean

Guijin Son, Hanwool Lee, Sungdong Kim, Seungone Kim, Niklas Muennighoff, Taekyoon Choi, Cheonbok Park, Kang Min Yoo, Stella Biderman

Preprint Under Review

Prometheus-Vision: Vision-Language Model as a Judge for Fine-grained Evaluation

Seongyun Lee*, Seungone Kim*, Sue Hyun Park, Geewook Kim, Minjoon Seo

ACL 2024 Findings

LangBridge: Multilingual Reasoning without Multilingual Supervision

Dongkeun Yoon, Joel Jang, Sungdong Kim, Seungone Kim, Sheikh Shafayat, Minjoon Seo

ACL 2024

Multi-Task Inference: Can Large Language Models Follow Multiple Instructions at Once?

Guijin Son*, Sangwon Baek, Sangdae Nam, Ilgyun Jeong, Seungone Kim*

ACL 2024

Prometheus: Inducing Fine-grained Evaluation Capability in Language Models

Seungone Kim*, Jamin Shin*, Yejin Cho*, Joel Jang, Shayne Longpre, Hwaran Lee, Sangdoo Yun, Seongjin Shin, Sungdong Kim, James Thorne, Minjoon Seo

ICLR 2024

FLASK: Fine-grained Language Model Evaluation based on Alignment Skill Sets

Seonghyeon Ye, Doyoung Kim, Sungdong Kim, Hyeonbin Hwang, Seungone Kim, Yongrae Jo, James Thorne, Juho Kim, Minjoon Seo

ICLR 2024

2023

The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-tuning

Seungone Kim*, Se June Joo*, Doyoung Kim, Joel Jang, Seonghyeon Ye, Jamin Shin, Minjoon Seo

EMNLP 2023

Exploring the Benefits of Training Expert Language Models over Instruction Tuning

Joel Jang, Seungone Kim, Seonghyeon Ye, Doyoung Kim, Lajanugen Logeswaran, Moontae Lee, Kyungjae Lee, Minjoon Seo

ICML 2023

CoTEVer: Chain of Thought Prompting Annotation Toolkit for Explanation Verification

Seungone Kim, Se June Joo, Yul Jang, Hyungjoo Chae, Jinyoung Yeo

EACL 2023 Demo Track

2022

Mind the Gap! Injecting Commonsense Knowledge for Abstractive Dialogue Summarization

Seungone Kim*, Se June Joo*, Hyungjoo Chae*, Chaehyeong Kim, Seung-won Hwang, Jinyoung Yeo

COLING 2022

(* indicates equal contribution)

Vitæ

Full CV in PDF.

  • CMU LTI Aug. 2024 - Present
    Ph.D. in Computer Science (Advisors: Graham Neubig and Sean Welleck)
    Working on (V)LM evaluation and weak-to-strong generalization.
  • AML Lab @ LG AI Research Jan. 2024 - Jun. 2024
    Research Intern (Mentor: Kyungjae Lee)
    Worked on building a comprehensive NLG benchmark that could match the granularity of human evaluation.
  • Language Lab @ NAVER AI Lab Mar. 2023 - Dec. 2023
    Research Intern (Mentor: Jamin Shin)
    Worked on building an open-source evaluator LM & VLM that could potentially replace GPT-4 and GPT-4V evaluation.
  • KAIST AI Mar. 2023 - Aug. 2024
    M.S. in Artificial Intelligence (Advisor: Minjoon Seo)
    Worked on developing evaluator LMs & VLMs and chain-of-thought fine-tuning. Early graduation (3 semesters).
  • LK Lab @ KAIST AI Jul. 2022 - Feb. 2023
    Research Intern (Mentor: Joel Jang)
    Worked on developing expert LMs that can generalize to novel tasks.
  • Yonsei University Mar. 2018 - Feb. 2023
    B.S. in Computer Science
    Early graduation (7 semesters).