Comparison of part-of-speech tagging systems
Apertium aims to have high-quality part-of-speech tagging, but in many cases it falls below the state of the art (around 97% tagging accuracy). This page collects a comparison of the tagging systems available in Apertium and gives some ideas of what could be done to improve them.
In the following table, values of the form x±y are the sample mean and standard deviation of the results of 10-fold cross-validation.
| System | Catalan | Spanish | Serbo-Croatian | Russian | Kazakh | Portuguese | Swedish |
|---|---|---|---|---|---|---|---|
| Tokens | 24,144 | 21,247 | 20,128 | 10,171 | 4,348 | 3,823 | 239 |
| 1st | 81.66 | 86.23 | 75.22 | 75.63 | 80.79 | 61.53 | 72.90 |
| CG→1st | 83.79 | 87.35 | 79.67 | 79.52 | 86.19 | 63.33 | 73.86 |
| Unigram model 1 | 91.72±1.37 | 91.41±1.31 | 63.03±3.27 | | | | |
| CG→Unigram model 1 | 92.37±1.33 | 92.52±1.18 | 63.29±3.24 | | | | |
| Unigram model 2 | 91.78±1.30 | 91.03±1.25 | 63.23±3.41 | | | | |
| CG→Unigram model 2 | 92.06±1.30 | 91.94±1.10 | 63.16±3.17 | | | | |
| Unigram model 3 | 91.74±1.29 | 91.01±1.25 | 63.23±3.41 | | | | |
| CG→Unigram model 3 | 92.03±1.29 | 91.91±1.08 | 63.16±3.17 | | | | |
| Bigram (unsup, 0 iters) | 85.05±1.22 | 83.60±1.94 | 62.99±3.11 | | | | |
| Bigram (unsup, 50 iters) | 88.81±1.36 | 87.37±2.03 | 61.31±3.43 | | | | |
| Bigram (unsup, 250 iters) | 88.53±1.35 | 86.99±2.03 | 61.21±3.50 | | | | |
| CG→Bigram (unsup, 0 iters) | 88.96±1.21 | 87.76±1.82 | 63.01±3.23 | | | | |
| CG→Bigram (unsup, 50 iters) | 90.77±1.68 | 89.34±1.71 | 62.82±3.26 | | | | |
| CG→Bigram (unsup, 250 iters) | 90.54±1.67 | 89.33±1.71 | 62.82±3.26 | | | | |
| Bigram (sup) | 94.60±1.06 | 93.52±1.46 | 63.14±3.24 | | | | |
| CG→Bigram (sup) | 94.62±1.38 | 92.70±1.60 | 63.09±3.37 | | | | |
| Lwsw (0 iters) | 90.16±1.00 | 89.78±1.27 | 62.80±3.67 | | | | |
| Lwsw (50 iters) | 90.51±0.98 | 89.98±1.38 | 62.74±3.62 | | | | |
| Lwsw (250 iters) | 90.51±0.98 | 90.06±1.39 | 62.74±3.62 | | | | |
| CG→Lwsw (0 iters) | 90.78±1.26 | 89.61±1.43 | 62.73±3.55 | | | | |
| CG→Lwsw (50 iters) | 91.05±1.21 | 89.63±1.56 | 62.73±3.55 | | | | |
| CG→Lwsw (250 iters) | 91.06±1.25 | 89.67±1.58 | 62.73±3.55 | | | | |
| kaz-tagger | | | | | | | |
| CG→kaz-tagger | | | | | | | |
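Both reporting styles used on this page can be reproduced from the per-fold accuracies. A minimal sketch, using hypothetical fold scores (not results from any of the experiments above):

```python
import statistics

# Hypothetical per-fold tagging accuracies from a 10-fold cross-validation.
fold_accuracies = [91.2, 92.8, 90.5, 93.1, 91.9, 92.4, 90.8, 93.5, 91.1, 92.0]

mean = statistics.mean(fold_accuracies)
# Sample standard deviation (n - 1 denominator).
stdev = statistics.stdev(fold_accuracies)
low, high = min(fold_accuracies), max(fold_accuracies)

print(f"{mean:.2f}±{stdev:.2f}")   # x±y style, as in the table above
print(f"[{low:.2f}, {high:.2f}]")  # [low, high] style, as in the table below
```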
In the following table, the intervals represent the [low, high] values (minimum and maximum accuracies) from 10-fold cross-validation.
| Language | Sent | Tok | Amb | 1st | CG+1st | Unigram | CG+Unigram | apertium-tagger | CG+apertium-tagger |
|---|---|---|---|---|---|---|---|---|---|
| Catalan | 1,413 | 24,144 | ? | 81.85 | 83.96 | [75.65, 78.46] | [87.76, 90.48] | [94.16, 96.28] | [93.92, 96.16] |
| Spanish | 1,271 | 21,247 | ? | 86.18 | 86.71 | [78.20, 80.06] | [87.72, 90.27] | [90.15, 94.86] | [91.84, 93.70] |
| Serbo-Croatian | 1,190 | 20,128 | ? | 75.22 | 79.67 | [75.36, 78.79] | [75.36, 77.28] | | |
| Russian | 451 | 10,171 | ? | 75.63 | 79.52 | [70.49, 72.94] | [74.68, 78.65] | n/a | n/a |
| Kazakh | 403 | 4,348 | ? | 80.79 | 86.19 | [84.36, 87.79] | [85.56, 88.72] | n/a | n/a |
| Portuguese | 119 | 3,823 | ? | 72.54 | 87.34 | [77.10, 87.72] | [84.05, 91.96] | | |
| Swedish | 11 | 239 | ? | 72.90 | 73.86 | [56.00, 82.97] | | | |
Sent = sentences, Tok = tokens, Amb = average ambiguity from the morphological analyser
Systems
1st
: Selects the first analysis from the morphological analyser.
CG
: Uses the CG (from the monolingual language package in languages) to preprocess the input.
Unigram
: Lexicalised unigram tagger.
apertium-tagger
: Uses the bigram HMM tagger included with Apertium.
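To illustrate the lexicalised unigram approach, here is a minimal sketch of the general technique (not Apertium's actual implementation): each surface form is tagged with the tag it co-occurred with most often in the training corpus, and unseen forms fall back to the most frequent tag overall. The toy corpus and tag names are illustrative.

```python
from collections import Counter, defaultdict

def train(tagged_corpus):
    """tagged_corpus: iterable of (word, tag) pairs."""
    counts = defaultdict(Counter)
    tag_counts = Counter()
    for word, t in tagged_corpus:
        counts[word][t] += 1
        tag_counts[t] += 1
    # For each word, keep its most frequent tag.
    lexicon = {w: c.most_common(1)[0][0] for w, c in counts.items()}
    # Fallback for unseen words: the most frequent tag overall.
    default = tag_counts.most_common(1)[0][0]
    return lexicon, default

def tag(words, lexicon, default):
    return [(w, lexicon.get(w, default)) for w in words]

corpus = [("the", "det"), ("cat", "n"), ("runs", "vblex"),
          ("the", "det"), ("run", "n"), ("run", "vblex"), ("run", "n")]
lexicon, default = train(corpus)
print(tag(["the", "run", "dog"], lexicon, default))
# "run" was seen twice as n and once as vblex, so it gets n;
# "dog" is unseen, so it gets the overall most frequent tag (n).
```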
Corpora
The tagged corpora used in the experiments are found in the monolingual packages in languages, under the texts/ subdirectory.
Todo
- Implement this tagger: https://spacy.io/blog/part-of-speech-POS-tagger-in-python
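The tagger described in that post is an averaged perceptron with greedy left-to-right decoding. A rough sketch of the core train/predict loop, under stated assumptions: the feature templates below are illustrative placeholders (not the post's exact feature set), and weight averaging is omitted for brevity.

```python
from collections import defaultdict

class PerceptronTagger:
    """Sketch of a perceptron POS tagger with greedy left-to-right decoding."""

    def __init__(self, tags):
        self.tags = tags
        self.weights = defaultdict(float)  # (feature, tag) -> weight

    def features(self, words, i, prev_tag):
        # Illustrative feature templates: current word, suffix, previous tag.
        w = words[i]
        return [f"word={w}", f"suffix={w[-3:]}", f"prev_tag={prev_tag}"]

    def predict(self, feats):
        scores = {t: sum(self.weights[(f, t)] for f in feats) for t in self.tags}
        return max(self.tags, key=lambda t: scores[t])

    def train(self, sentences, epochs=5):
        for _ in range(epochs):
            for words, gold_tags in sentences:
                prev = "<s>"
                for i, gold in enumerate(gold_tags):
                    feats = self.features(words, i, prev)
                    guess = self.predict(feats)
                    if guess != gold:
                        # Perceptron update: reward the gold tag's features,
                        # penalise the wrongly guessed tag's features.
                        for f in feats:
                            self.weights[(f, gold)] += 1.0
                            self.weights[(f, guess)] -= 1.0
                    prev = gold  # condition on the gold history during training

    def tag(self, words):
        prev, out = "<s>", []
        for i in range(len(words)):
            t = self.predict(self.features(words, i, prev))
            out.append(t)
            prev = t
        return out

sents = [(["the", "cat", "runs"], ["det", "n", "vblex"]),
         (["the", "dog", "runs"], ["det", "n", "vblex"])]
tagger = PerceptronTagger(["det", "n", "vblex"])
tagger.train(sents)
print(tagger.tag(["the", "cat", "runs"]))
```

A full implementation would also average the weights over all updates, which is what makes the tagger in the linked post robust on held-out data.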