A Hybrid Neuro-Symbolic Pipeline for Coreference Resolution and AMR-Based Semantic Parsing
| Main Authors: | , , , |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2025-06-01 |
| Series: | Information |
| Subjects: | |
| Online Access: | https://www.mdpi.com/2078-2489/16/7/529 |
| Summary: | Large Language Models (LLMs) have transformed Natural Language Processing (NLP), yet they continue to struggle with deep semantic understanding, particularly in tasks like coreference resolution and structured semantic inference. This study presents a hybrid neuro-symbolic pipeline that combines transformer-based contextual encoding with symbolic coreference resolution and Abstract Meaning Representation (AMR) parsing to improve natural language understanding. The pipeline resolves referential ambiguity using a rule-based coreference module and generates semantic graphs from the disambiguated input using a symbolic AMR parser. Experiments on public benchmark datasets—PreCo for coreference and the AMR 3.0 Public Subset for semantic parsing—demonstrate that the hybrid model consistently outperforms symbolic-only and neural-only baselines. The model achieved notable gains in F1 score for coreference (72.4%) and Smatch score for semantic parsing (76.5%), with marked improvements in pronoun resolution and semantic role labeling. In addition to accuracy, the pipeline offers interpretability through modular components and auditable intermediate outputs, making it suitable for high-stakes applications requiring transparency. These findings show that integrating symbolic reasoning within a neural architecture offers a robust and practical path toward overcoming key limitations of current LLMs in semantic-level NLP tasks. |
|---|---|
| ISSN: | 2078-2489 |
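The two stages the summary describes can be sketched in miniature. The snippet below is a hypothetical simplification, not the authors' implementation: `resolve_pronouns` stands in for the rule-based coreference module (it substitutes third-person pronouns with the most recent non-sentence-initial capitalized mention), and `smatch_like_f1` illustrates the idea behind Smatch by computing triple-overlap F1 under the simplifying assumption that graph variables are already aligned (real Smatch searches for an optimal variable mapping).

```python
PRONOUNS = {"he", "she", "it", "him", "her"}

def resolve_pronouns(text: str) -> str:
    """Crude rule-based coreference pass (illustrative only):
    replace third-person pronouns with the last seen proper-noun-like mention."""
    resolved = []
    last_mention = None
    sentence_start = True
    for token in text.split():
        word = token.strip(".,;:!?")
        if word.lower() in PRONOUNS and last_mention:
            # Keep trailing punctuation by substituting inside the raw token.
            resolved.append(token.replace(word, last_mention, 1))
        else:
            # Treat capitalized, non-sentence-initial words as candidate mentions.
            if word[:1].isupper() and not sentence_start:
                last_mention = word
            resolved.append(token)
        sentence_start = token.endswith((".", "!", "?"))
    return " ".join(resolved)

def smatch_like_f1(pred: set, gold: set) -> float:
    """Triple-overlap F1 between predicted and gold AMR triples.
    A simplification of Smatch: assumes variable names are already aligned."""
    if not pred or not gold:
        return 0.0
    overlap = len(pred & gold)
    p, r = overlap / len(pred), overlap / len(gold)
    return 2 * p * r / (p + r) if p + r else 0.0

print(resolve_pronouns("Yesterday Marie arrived. Then she spoke."))
# → Yesterday Marie arrived. Then Marie spoke.
```

Feeding the resolved text (rather than the raw text) to the AMR parser is the crux of the pipeline: the parser then attaches semantic roles to the explicit antecedent instead of an ambiguous pronoun node.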