Seungone Kim

M.S. Student at KAIST AI

seungone@kaist.ac.kr

About
Hello! I am an M.S. student at the Language & Knowledge Lab at KAIST, advised by Minjoon Seo.

My primary research focus lies at the intersection of natural language generation (NLG) and establishing a science of language model behaviors. Concretely, my research interests include: (i) developing fine-grained evaluation frameworks that systematically identify which specific capabilities language models lack, and (ii) exploring the role of synthetic data in inducing desired abilities in language models for further improvement.

I am actively looking for a PhD position in the US (admission for Fall 2024)!
News
Jan 2024     Our Prometheus and FLASK papers got accepted to ICLR 2024!
Jan 2024     A preprint of our Prometheus-Vision paper has been released w/ data & code!
Oct 2023     Our Prometheus and FLASK papers got accepted to the NeurIPS 2023 Instruction Following Workshop!
Oct 2023     Our CoT Collection paper got accepted to EMNLP 2023!
Apr 2023     Our ExpertLM paper got accepted to ICML 2023!
Feb 2023     Our CoTEVer paper got accepted to EACL 2023 (Demo Track)!
Oct 2022     I got accepted to KAIST AI as an M.S. student. I will continue doing research at the LK Lab.
Oct 2022     Our SICK paper got accepted to COLING 2022!

Education

KAIST AI Mar. 2023 - Present

M.S. in Artificial Intelligence (Advisor: Minjoon Seo)

Yonsei University Mar. 2018 - Feb. 2023

B.S. in Computer Science

Publications

LangBridge: Multilingual Reasoning without Multilingual Supervision

Dongkeun Yoon, Joel Jang, Sungdong Kim, Seungone Kim, Sheikh Shafayat, Minjoon Seo

Preprint Under Review

Prometheus-Vision: Vision-Language Model as a Judge for Fine-grained Evaluation

Seongyun Lee, Seungone Kim, Sue Hyun Park, Geewook Kim, Minjoon Seo

Preprint Under Review

Multi-Task Inference: Can Large Language Models Follow Multiple Instructions at Once?

Guijin Son, Sangwon Baek, Sangdae Nam, Ilgyun Jeong, Seungone Kim

Preprint Under Review

Personalized Soups: Personalized Large Language Model Alignment via Post-hoc Parameter Merging

Joel Jang, Seungone Kim, Bill Yuchen Lin, Yizhong Wang, Jack Hessel, Luke Zettlemoyer, Hannaneh Hajishirzi, Yejin Choi, Prithviraj Ammanabrolu

Preprint Under Review

Prometheus: Inducing Fine-grained Evaluation Capability in Language Models

Seungone Kim, Jamin Shin, Yejin Cho, Joel Jang, Shayne Longpre, Hwaran Lee, Sangdoo Yun, Seongjin Shin, Sungdong Kim, James Thorne, Minjoon Seo

ICLR 2024 & NeurIPS 2023 Instruction Following Workshop

FLASK: Fine-grained Language Model Evaluation based on Alignment Skill Sets

Seonghyeon Ye, Doyoung Kim, Sungdong Kim, Hyeonbin Hwang, Seungone Kim, Yongrae Jo, James Thorne, Juho Kim, Minjoon Seo

ICLR 2024 & NeurIPS 2023 Instruction Following Workshop

The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-tuning

Seungone Kim, Se June Joo, Doyoung Kim, Joel Jang, Seonghyeon Ye, Jamin Shin, Minjoon Seo

EMNLP 2023

Exploring the Benefits of Training Expert Language Models over Instruction Tuning

Joel Jang, Seungone Kim, Seonghyeon Ye, Doyoung Kim, Lajanugen Logeswaran, Moontae Lee, Kyungjae Lee, Minjoon Seo

ICML 2023

CoTEVer: Chain of Thought Prompting Annotation Toolkit for Explanation Verification

Seungone Kim, Se June Joo, Yul Jang, Hyungjoo Chae, Jinyoung Yeo

EACL 2023

Mind the Gap! Injecting Commonsense Knowledge for Abstractive Dialogue Summarization

Seungone Kim, Se June Joo, Hyungjoo Chae, Chaehyeong Kim, Seung-won Hwang, Jinyoung Yeo

COLING 2022

(* indicates equal contribution)

Vitæ

Full CV in PDF.

  • AML Lab @ LG AI Research Jan. 2024 - Present
    Research Intern (Mentor: Kyungjae Lee)
    Working on building a comprehensive NLG benchmark.
  • Language Lab @ Naver AI Lab Mar. 2023 - Dec. 2023
    Research Intern (Mentor: Jamin Shin)
    Worked on building an open-source evaluator LM & VLM that could potentially replace GPT-4 and GPT-4V evaluation.
  • KAIST AI Mar. 2023 - Aug. 2024 (Expected)
    M.S. in Artificial Intelligence (Advisor: Minjoon Seo)
    Working on inducing desired generation behaviors into LMs using synthetic data.
  • LK Lab @ KAIST AI Jul. 2022 - Feb. 2023
    Research Intern (Mentor: Joel Jang)
    Worked on building distributed expert language models to tackle the weaknesses of a single instruction-tuned model.
  • Data Intelligence Lab @ SNU Sep. 2021 - Jun. 2022
    Research Intern (Mentor: Dohyun Lee)
    Worked on building a multilingual language model for low-resource languages via continual cross-lingual pre-training.
  • Yonsei University Mar. 2018 - Feb. 2023
    B.S. in Computer Science
    Early Graduation (7 semesters)