Comparison of part-of-speech tagging systems
Apertium aims for high-quality part-of-speech tagging, but in many cases falls below the state of the art (around 97% tagging accuracy). This page collects a comparison of the tagging systems available in Apertium and gives some ideas of what could be done to improve them.
In the following table, values of the form x±y are the sample mean and standard deviation of the results of 10-fold cross-validation.
| System | Catalan | Spanish | Serbo-Croatian | Russian | Kazakh | Portuguese | Swedish |
|---|---|---|---|---|---|---|---|
| Tokens | 23,673 | 20,487 | 20,128 | 10,171 | 4,348 | 5,718 | 239 |
| 1st | 81.66 | 86.23 | 75.22 | 75.63 | 80.79 | 66.58 | 72.90 |
| CG→1st | 83.79 | 87.35 | 79.67 | 79.52 | 86.19 | 77.51 | 73.86 |
| Unigram model 1 | 91.72±1.37 | 91.41±1.31 | | | | | |
| CG→Unigram model 1 | 92.37±1.33 | 92.52±1.18 | | | | | |
| Unigram model 2 | 91.78±1.30 | 91.03±1.25 | 77.35±5.20 | | | | |
| CG→Unigram model 2 | 92.06±1.30 | 91.94±1.10 | 79.19±5.66 | | | | |
| Unigram model 3 | 91.74±1.29 | 91.01±1.25 | | | | | |
| CG→Unigram model 3 | 92.03±1.29 | 91.91±1.08 | 79.19±5.66 | | | | |
| Bigram (unsup, 0 iters) | 85.05±1.22 | 83.60±1.94 | 71.28±3.75 | | | | |
| Bigram (unsup, 50 iters) | 88.81±1.36 | 87.37±2.03 | | | | | |
| Bigram (unsup, 250 iters) | 88.53±1.35 | 86.99±2.03 | | | | | |
| CG→Bigram (unsup, 0 iters) | 88.96±1.21 | 87.76±1.82 | | | | | |
| CG→Bigram (unsup, 50 iters) | 90.77±1.68 | 89.34±1.71 | | | | | |
| CG→Bigram (unsup, 250 iters) | 90.54±1.67 | 89.33±1.71 | | | | | |
| Bigram (sup) | 94.60±1.06 | 93.52±1.46 | | | | | |
| CG→Bigram (sup) | 94.62±1.38 | 92.70±1.60 | | | | | |
| Lwsw (0 iters) | 90.16±1.00 | 89.78±1.27 | 73.13±3.87 | | | | |
| Lwsw (50 iters) | 90.51±0.98 | 89.98±1.38 | 72.90±3.97 | | | | |
| Lwsw (250 iters) | 90.51±0.98 | 90.06±1.39 | 72.87±4.09 | | | | |
| CG→Lwsw (0 iters) | 90.78±1.26 | 89.61±1.43 | 77.20±5.11 | | | | |
| CG→Lwsw (50 iters) | 91.05±1.21 | 89.63±1.56 | | | | | |
| CG→Lwsw (250 iters) | 91.06±1.25 | 89.67±1.58 | | | | | |
| kaz-tagger | | | | | | | |
| CG→kaz-tagger | | | | | | | |
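The x±y entries above can be reproduced from per-fold accuracies. A minimal sketch (the fold values below are made up for illustration; `summarize_folds` is a hypothetical helper, not part of Apertium):

```python
import statistics

def summarize_folds(accuracies):
    """Summarize cross-validation accuracies as (sample mean, sample std)."""
    mean = statistics.mean(accuracies)
    std = statistics.stdev(accuracies)  # sample standard deviation (n - 1 denominator)
    return mean, std

# Ten made-up fold accuracies, one per fold of 10-fold cross-validation.
folds = [91.2, 92.0, 90.5, 93.1, 91.8, 92.4, 90.9, 91.5, 92.2, 91.1]
mean, std = summarize_folds(folds)
print(f"{mean:.2f}±{std:.2f}")  # → 91.67±0.78
```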
In the following table, the intervals represent the [low, high] values from 10-fold cross-validation.
| Language | Sent | Tok | Amb | 1st | CG+1st | Unigram | CG+Unigram | apertium-tagger | CG+apertium-tagger |
|---|---|---|---|---|---|---|---|---|---|
| Catalan | 1,413 | 24,144 | ? | 81.85 | 83.96 | [75.65, 78.46] | [87.76, 90.48] | [94.16, 96.28] | [93.92, 96.16] |
| Spanish | 1,271 | 21,247 | ? | 86.18 | 86.71 | [78.20, 80.06] | [87.72, 90.27] | [90.15, 94.86] | [91.84, 93.70] |
| Serbo-Croatian | 1,190 | 20,128 | ? | 75.22 | 79.67 | [75.36, 78.79] | [75.36, 77.28] | | |
| Russian | 451 | 10,171 | ? | 75.63 | 79.52 | [70.49, 72.94] | [74.68, 78.65] | n/a | n/a |
| Kazakh | 403 | 4,348 | ? | 80.79 | 86.19 | [84.36, 87.79] | [85.56, 88.72] | n/a | n/a |
| Portuguese | 119 | 3,823 | ? | 72.54 | 87.34 | [77.10, 87.72] | [84.05, 91.96] | | |
| Swedish | 11 | 239 | ? | 72.90 | 73.86 | [56.00, 82.97] | | | |
Sent = sentences, Tok = tokens, Amb = average ambiguity from the morphological analyser
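The Sent, Tok, and Amb columns, and the [low, high] intervals, can be sketched as follows. The data layout and helper names here are hypothetical, chosen only to illustrate what the columns measure:

```python
def corpus_stats(sentences):
    """Compute the Sent/Tok/Amb columns for an analysed corpus.

    `sentences` is a list of sentences; each sentence is a list of tokens,
    and each token is the list of analyses the morphological analyser
    returned for it (hypothetical layout, for illustration only).
    """
    tokens = [tok for sent in sentences for tok in sent]
    sent_count = len(sentences)
    tok_count = len(tokens)
    # Average ambiguity: mean number of analyses per token.
    ambiguity = sum(len(analyses) for analyses in tokens) / tok_count
    return sent_count, tok_count, ambiguity

def fold_interval(accuracies):
    """[low, high] interval over cross-validation folds, as in the table."""
    return min(accuracies), max(accuracies)

# Two toy sentences with 2 tokens each; analyses per token: 2, 1, 1, 3.
corpus = [[["DET", "PRN"], ["N"]], [["V"], ["ADV", "ADJ", "N"]]]
print(corpus_stats(corpus))  # → (2, 4, 1.75)
```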
Systems
1st
: Selects the first analysis from the morphological analyser.

CG
: Uses the CG (from the monolingual language package in languages) to preprocess the input.

Unigram
: Lexicalised unigram tagger.

apertium-tagger
: Uses the bigram HMM tagger included with Apertium.
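As a rough illustration of the 1st and Unigram systems above (not the actual Apertium implementation; all names here are hypothetical), a lexicalised unigram tagger picks the most frequent tag observed for each word form in the training corpus, falling back to the analyser's first analysis for unseen words:

```python
from collections import Counter, defaultdict

def train_unigram(tagged_corpus):
    """Count tag frequencies per word form from (word, tag) training pairs."""
    counts = defaultdict(Counter)
    for word, tag in tagged_corpus:
        counts[word][tag] += 1
    return counts

def tag_unigram(counts, word, analyses):
    """Pick the most frequent tag seen for this word among the analyser's
    proposals; fall back to the first analysis (the "1st" baseline)."""
    if word in counts:
        seen = [t for t, _ in counts[word].most_common() if t in analyses]
        if seen:
            return seen[0]
    return analyses[0]

# Toy training data: "la" is usually a determiner, occasionally a pronoun.
corpus = [("la", "DET"), ("la", "DET"), ("la", "PRN"), ("casa", "N")]
model = train_unigram(corpus)
print(tag_unigram(model, "la", ["DET", "PRN"]))   # most frequent tag wins
print(tag_unigram(model, "nova", ["ADJ", "N"]))   # unseen word: first analysis
```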
Corpora
The tagged corpora used in the experiments are found in the monolingual packages in languages, under the texts/ subdirectory.
Todo
- Implement the averaged perceptron tagger described at https://spacy.io/blog/part-of-speech-POS-tagger-in-python