Ranking Assisted Unsupervised Morphological Disambiguation of Turkish
| Main Authors: | , |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2025-01-01 |
| Series: | IEEE Access |
| Subjects: | |
| Online Access: | https://ieeexplore.ieee.org/document/10908819/ |
| Summary: | In comparison to English, Turkish is an agglutinative language with fewer resources. The agglutinative properties of words result in a significant number of morphological analyses, creating ambiguity in morphological disambiguation and syntactic parsing. Traditional approaches typically rely on supervised learning models trained on the correct morphological analysis of a given phrase. In this study, we propose a ranking method to limit and filter out irrelevant morphological tags from all possible combinations of morphological analyses of a given sentence without supervision. The suggested method selects less ambiguous analyses for statistical aggregation and applies inference through the PageRank algorithm on a densely connected graph. Subsequently, this graph is utilized to develop a voting schema for each test word based on the connections in the test sentence. Experimental evaluations of the proposed methods on three independently and manually annotated test datasets indicate a token accuracy of approximately 80% and an accuracy of around 61% for ambiguous tokens. In all ranking evaluations, the best scores from the PageRank variations significantly outperform those of Self-Attention LSTM and ELMO deep learning models. The training process of PageRank is notably straightforward and efficient, requiring $O(n^{2})$ parameter adjustments, considerably fewer than those required by the backpropagation method used in neural network training. Furthermore, the proposed method is easily adaptable for reducing ambiguity in sentences from different genres with scarce samples. |
| ISSN: | 2169-3536 |
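
The summary above only outlines the approach at a high level, and the record does not include the authors' implementation. The following is a minimal, illustrative Python sketch of the workflow the abstract describes, under stated assumptions: unambiguous tokens contribute co-occurrence weights to a tag graph, PageRank-style scores are computed by power iteration, and each ambiguous test token is resolved by a simple vote over its ties to the other tags in the same sentence. All tag labels and the toy corpus are hypothetical placeholders, and this is an approximation rather than the method reported in the article.

```python
# Minimal illustrative sketch (not the authors' implementation): PageRank over a
# tag co-occurrence graph plus a per-token vote. Tag strings and the toy corpus
# below are hypothetical placeholders.
from collections import defaultdict
from itertools import combinations


def build_cooccurrence_graph(sentences):
    """sentences: list of sentences, each a list of (surface, [candidate_tags]).
    Per the abstract's aggregation step, only the less ambiguous tokens
    (here: tokens with a single candidate) contribute edge weights."""
    weights = defaultdict(float)
    for sent in sentences:
        tags = [cands[0] for _, cands in sent if len(cands) == 1]
        for a, b in combinations(tags, 2):
            weights[(a, b)] += 1.0
            weights[(b, a)] += 1.0
    return weights


def pagerank(weights, damping=0.85, iterations=50):
    """Weighted PageRank computed by plain power iteration."""
    nodes = {n for edge in weights for n in edge}
    out_weight = defaultdict(float)
    for (src, _), w in weights.items():
        out_weight[src] += w
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new_rank = {n: (1.0 - damping) / len(nodes) for n in nodes}
        for (src, dst), w in weights.items():
            new_rank[dst] += damping * rank[src] * w / out_weight[src]
        rank = new_rank
    return rank


def disambiguate(sentence, weights, rank):
    """Voting schema: prefer the candidate tag most strongly tied to the other
    tags in the test sentence, falling back on its global PageRank score."""
    context = {c for _, cands in sentence for c in cands}
    result = []
    for surface, cands in sentence:
        def score(tag):
            ties = sum(weights.get((tag, other), 0.0)
                       for other in context if other != tag)
            return (ties, rank.get(tag, 0.0))
        result.append((surface, max(cands, key=score)))
    return result


if __name__ == "__main__":
    # Toy training data with made-up morphological tag labels.
    train = [
        [("evde", ["Noun+Loc"]), ("kaldı", ["Verb+Past"])],
        [("okula", ["Noun+Dat"]), ("gitti", ["Verb+Past"])],
    ]
    weights = build_cooccurrence_graph(train)
    rank = pagerank(weights)
    test = [("evde", ["Noun+Loc", "Verb+Imp"]), ("kaldı", ["Verb+Past"])]
    print(disambiguate(test, weights, rank))  # picks "Noun+Loc" for "evde"
```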