ADELIE: Aligning Large Language Models on Information Extraction
The paper "**ADELIE**" introduces a novel approach to improving Information Extraction (IE) capabilities in Large Language Models through *alignment learning*. The research presents a framework that enhances LLMs' ability to extract structured information from unstructured text without requiring extensive task-specific training data. The authors demonstrate how **reward modeling** and **reinforcement learning** can be used to align LLM outputs with desired IE formats and standards. The framework incorporates *self-consistency checking*, *format validation*, and *semantic verification* to ensure high-quality extractions. Results show significant improvements across various IE tasks, including *named entity recognition*, *relation extraction*, and *event detection*. ADELIE achieves state-of-the-art performance while requiring minimal human supervision, making it particularly valuable for real-world applications where labeled data is scarce. The paper also addresses challenges in maintaining extraction accuracy while ensuring output consistency and format compliance.