Eunhwan Park (박은환; 朴殷煥)

I am a research engineer at Buzzni, where I am fulfilling my mandatory military service. I obtained an M.S. in Computer Science from Jeonbuk National University (JBNU), where I was fortunate to be advised by Professor Seung-Hoon Na. Prior to JBNU, I obtained a B.S. in Computer Science from Kookmin University in Feb 2021. Before Kookmin University, I completed the first-phase certification of the 6th Software Maestro program, run by the government of the Republic of Korea, in Dec 2015.

During my M.S., I was fortunate to intern at Naver.

Email  /  CV  /  Google Scholar  /  Twitter  /  GitHub

🔥 What's New
  • [2024. 02.] Our paper "RADCoT: Retrieval-Augmented Distillation to Specialization Models for Generating Chain-of-Thoughts in Query Expansion" was accepted to LREC-COLING 2024!
  • [2024. 01.] Our paper "Ask, Assess, and Refine: Rectifying Factual Consistency and Hallucination in LLMs with Metric-Guided Feedback Learning" was accepted to EACL 2024!
  • [2023. 12.] I gave an invited talk at the Top Conference Session, KSC 2023.
  • [2023. 11.] Started serving as an ARR (ACL Rolling Review) reviewer.
  • [2023. 03.] Started working at Buzzni as a Research Engineer, fulfilling my mandatory military service!
  • [2023. 01.] Our paper "MAFiD: Moving Average Equipped Fusion-in-Decoder for Question Answering over Tabular and Textual Data" was accepted to EACL 2023 Findings!
  • [2022. 11.] Our paper "RINK: Reader-Inherited Evidence Reranker for Table-and-Text Open Domain Question Answering", a collaboration with NAVER Corporation, was accepted to AAAI 2023! Big thanks to my advisor and NAVER Corporation!
  • [2022. 08.] Our paper "SISER: Semantic-Infused Selective Graph Reasoning for Fact Verification", a collaboration with NAVER Corporation, was accepted to COLING 2022! Big thanks to my advisor and NAVER Corporation!
  • [2022. 02.] Our paper "LM-BFF-MS: Improving Few-Shot Fine-tuning of Language Models based on Multiple Soft Demonstration Memory", a collaboration with NAVER Corporation, was accepted to ACL 2022! Big thanks to my advisor and NAVER Corporation!

Experience

  • Buzzni: AI Research Engineer (Mar. 2023 - )
  • Naver: Research Intern (May 2021 - Aug. 2021)
  • Software Maestro 6th: Mentee (Jul. 2015 - Nov. 2015)

Research

My research interests lie in natural language processing, including Large Language Models, Factual Consistency, Multimodality, Knowledge Augmentation, and Information Retrieval.

I am passionate about (1) mitigating hallucination in Large Language Models by leveraging feedback, and (2) storing and utilizing knowledge in the form of a Pluggable Knowledge Memory.

Having recently completed [C5] and [C6], I have started research on storing and utilizing domain-specific knowledge as external memory, and on manipulating knowledge-level neurons for editing and unlearning:

  • Memorizing Task Vectors
  • Plug-and-Play Knowledge Injection Framework
  • Knowledge Unlearning for LLMs

Publications

(C = Conference, W = Workshop, P = Preprint, R = Under Review)

[P1]. Unleash the Potential of CLIP for Video Highlight Detection
Donghoon Han*, Seunghyeon Seo*, Eunhwan Park, Seong-Uk Nam, Nojun Kwak
arXiv Preprint
pdf
[C6]. RADCoT: Retrieval-Augmented Distillation to Specialization Models for Generating Chain-of-Thoughts in Query Expansion
Sung-Min Lee*, Eunhwan Park*, DongHyeon Jeon, Inho Kang, Seung-Hoon Na
Proceedings of LREC-COLING 2024
to appear
[C5]. Ask, Assess, and Refine: Rectifying Factual Consistency and Hallucination in LLMs with Metric-Guided Feedback Learning
Dongyub Lee*, Eunhwan Park*, Hodong Lee, Heuiseok Lim
Proceedings of EACL 2024
pdf

We introduce the Ask, Assess, and Refine framework, which adopts an explicit evaluation paradigm with metrics specifically tailored to assess citation errors and hallucination, aiming to address the hallucination issue in LLMs.
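
As a rough sketch of the idea only (not the paper's implementation; `generate`, `assess`, and `refine` are hypothetical callables standing in for the model and the tailored metrics), the metric-guided loop could look like:

    def ask_assess_refine(question, generate, assess, refine,
                          max_rounds=3, threshold=0.9):
        # Illustrative metric-guided feedback loop; the three callables are
        # hypothetical stand-ins: generate(q) -> answer,
        # assess(q, a) -> (score, feedback), refine(q, a, fb) -> answer.
        answer = generate(question)                      # Ask: draft an answer
        for _ in range(max_rounds):
            score, feedback = assess(question, answer)   # Assess: metric-guided check
            if score >= threshold:                       # consistent enough; stop
                return answer
            answer = refine(question, answer, feedback)  # Refine using the feedback
        return answer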

[C4]. MAFiD: Moving Average Equipped Fusion-in-Decoder for Question Answering over Tabular and Textual Data
Sung-Min Lee, Eunhwan Park, Daeryong Seo, Donghyeon Jeon, Inho Kang, Seung-Hoon Na
Proceedings of EACL Findings 2023
pdf

We combine Fusion-in-Decoder (FiD) with an exponential moving average (EMA), proposing the Moving Average Equipped Fusion-in-Decoder, which handles long-range reasoning over tabular and textual evidence effectively.
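
For intuition only: the EMA at the heart of this idea is the standard recurrence y_t = d * y_(t-1) + (1 - d) * x_t. A generic sketch over a sequence of hidden states (not MAFiD's actual gating) could be:

    import torch

    def ema(hidden_states: torch.Tensor, decay: float = 0.9) -> torch.Tensor:
        # Plain exponential moving average over the sequence dimension.
        # hidden_states: (seq_len, dim). Illustrative only; MAFiD's
        # architecture is more involved than this bare recurrence.
        smoothed = torch.empty_like(hidden_states)
        running = hidden_states[0]
        for t in range(hidden_states.size(0)):
            running = decay * running + (1.0 - decay) * hidden_states[t]
            smoothed[t] = running
        return smoothed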

[C3]. RINK: Reader-Inherited Evidence Reranker for Table-and-Text Open Domain Question Answering
Eunhwan Park, Sung-Min Lee, Daeryong Seo, Seonhoon Kim, Inho Kang, Seung-Hoon Na
Proceedings of AAAI 2023
pdf

We propose a novel set-level reranking method that is applied to sampled sets of blocks; the resulting set-level evidence scores are then aggregated to compute the relevance score of each individual block.
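
A loose sketch of the aggregation idea (the `score_set` callable is a hypothetical stand-in for the reader-inherited set-level reranker; this is not the paper's exact procedure):

    import random
    from collections import defaultdict

    def block_relevance(blocks, score_set, num_sets=20, set_size=4, seed=0):
        # Average set-level scores over the sampled sets containing each block.
        # score_set(list_of_blocks) -> float is a hypothetical stand-in for
        # the set-level reranker; illustrative only.
        rng = random.Random(seed)
        totals, counts = defaultdict(float), defaultdict(int)
        for _ in range(num_sets):
            sampled = rng.sample(blocks, min(set_size, len(blocks)))
            s = score_set(sampled)            # one relevance score per set
            for b in sampled:                 # credit every block in the set
                totals[b] += s
                counts[b] += 1
        return {b: totals[b] / counts[b] for b in counts}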

[C2]. SISER: Semantic-Infused Selective Graph Reasoning for Fact Verification
Eunhwan Park*, Jong-Hyeon Lee*, Donghyeon Jeon, Seonhoon Kim, Inho Kang, Seung-Hoon Na
Proceedings of COLING 2022
pdf

We enhance reasoning ability by extensively exploiting additional semantic units for graph reasoning, and by integrating semantic-level reasoning with sequence reasoning and selective graph reasoning.

[C1]. LM-BFF-MS: Improving Few-Shot Fine-tuning of Language Models based on Multiple Soft Demonstration Memory
Eunhwan Park, Donghyeon Jeon, Seonhoon Kim, Inho Kang, Seung-Hoon Na
Proceedings of ACL 2022
source, pdf

We propose prompts with multiple soft demonstration memories, based on the automatic generation of multiple label phrases and a soft demonstration memory equipped with an auxiliary NDP task.

[W1]. JBNU-CCLab at SemEval-2022 Task 7: DeBERTa for Identifying Plausible Clarifications in Instructional Texts
Daewook Kang, Sung-Min Lee, Eunhwan Park, Seung-Hoon Na
Proceedings of SemEval-2022 @ NAACL 2022
pdf

Professional Service
  • Reviewer: ACL Rolling Review (Aug. 2023 - )

Design and source code from Jon Barron's website.