Comparison of part-of-speech tagging systems

From Apertium
Revision as of 18:14, 25 May 2016 by Frankier (talk | contribs)

Apertium aims for high-quality part-of-speech tagging, but in many cases its accuracy falls below the state of the art (around 97%). This page collects a comparison of the tagging systems in Apertium and gives some ideas of what could be done to improve them.

In the following table, values of the form x±y are the sample mean and sample standard deviation of the accuracies from 10-fold cross-validation.

System | Catalan | Spanish | Serbo-Croatian | Russian | Kazakh | Portuguese | Swedish
Tokens | 24,144 | 21,247 | 20,128 | 10,171 | 4,348 | 3,823 | 239
1st | 81.66 | 86.18 | 75.22 | 75.63 | 80.79 | 61.53 | 72.90
CG→1st | 83.79 | 87.35 | 79.67 | 79.52 | 86.19 | 63.33 | 73.86
Unigram model 1 | 91.72±1.37 | | | | | 63.03±3.27 |
CG→Unigram model 1 | 92.37±1.33 | 92.03±1.63 | | | | 63.29±3.24 |
Unigram model 2 | 91.78±1.30 | | | | | 63.23±3.41 |
CG→Unigram model 2 | 92.06±1.30 | | | | | 63.16±3.17 |
Unigram model 3 | 91.74±1.29 | | | | | 63.23±3.41 |
CG→Unigram model 3 | 92.03±1.29 | | | | | 63.16±3.17 |
Bigram (unsup, 0 iters) | 85.05±1.22 | | | | | 62.99±3.11 |
Bigram (unsup, 50 iters) | 88.81±1.36 | | | | | 61.31±3.43 |
Bigram (unsup, 250 iters) | 88.53±1.35 | | | | | 61.21±3.50 |
CG→Bigram (unsup, 0 iters) | 88.96±1.21 | | | | | 63.01±3.23 |
CG→Bigram (unsup, 50 iters) | 90.77±1.68 | 89.34±1.71 | | | | 62.82±3.26 |
CG→Bigram (unsup, 250 iters) | 90.54±1.67 | | | | | 62.82±3.26 |
Bigram (sup, 0 iters) | 94.60±1.06 | | | | | 63.14±3.24 |
Bigram (sup, 50 iters) | 91.82±1.08 | 91.32±1.71 | | | | 62.97±3.57 |
Bigram (sup, 250 iters) | 91.43±1.29 | | | | | 62.94±3.61 |
CG→Bigram (sup, 0 iters) | 94.62±1.38 | | | | | 63.09±3.37 |
CG→Bigram (sup, 50 iters) | 92.31±1.28 | 91.61±1.54 | | | | 63.05±3.45 |
CG→Bigram (sup, 250 iters) | 92.02±1.43 | 91.55±1.54 | | | | 63.02±3.48 |
Lwsw (0 iters) | 90.16±1.00 | | | | | 62.80±3.67 |
Lwsw (50 iters) | 90.51±0.98 | | | | | 62.74±3.62 |
Lwsw (250 iters) | 90.51±0.98 | 90.06±1.39 | | | | 62.74±3.62 |
CG→Lwsw (0 iters) | 90.78±1.26 | | | | | 62.73±3.55 |
CG→Lwsw (50 iters) | 91.05±1.21 | | | | | 62.73±3.55 |
CG→Lwsw (250 iters) | 91.06±1.25 | 89.67±1.58 | | | | 62.73±3.55 |
kaz-tagger | | | | | | |
CG→kaz-tagger | | | | | | |
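The x±y entries come from summarising the per-fold accuracies of 10-fold cross-validation. A minimal sketch of how such a summary can be produced is given below; the `train_and_eval` callback is hypothetical and stands in for training and scoring any of the taggers compared here:

```python
import random
import statistics

def cross_validation_summary(tagged_sentences, train_and_eval, k=10, seed=0):
    """Run k-fold cross-validation and summarise the per-fold accuracies.

    train_and_eval(train, test) is a hypothetical callback that trains a
    tagger on the training sentences and returns its accuracy on the
    held-out test sentences.
    """
    sentences = list(tagged_sentences)
    random.Random(seed).shuffle(sentences)       # fixed seed: reproducible folds
    folds = [sentences[i::k] for i in range(k)]  # k roughly equal folds
    accuracies = []
    for i in range(k):
        test = folds[i]
        train = [s for j, fold in enumerate(folds) if j != i for s in fold]
        accuracies.append(train_and_eval(train, test))
    # x±y: sample mean ± sample standard deviation over the folds;
    # min and max over the folds give a [low, high] interval.
    return (statistics.mean(accuracies), statistics.stdev(accuracies),
            min(accuracies), max(accuracies))
```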


In the following table, the intervals give the [low, high] (minimum and maximum) per-fold accuracies from 10-fold cross-validation.

Language | Sent | Tok | Amb | 1st | CG→1st | Unigram | CG→Unigram | apertium-tagger | CG→apertium-tagger
Catalan | 1,413 | 24,144 | ? | 81.85 | 83.96 | [75.65, 78.46] | [87.76, 90.48] | [94.16, 96.28] | [93.92, 96.16]
Spanish | 1,271 | 21,247 | ? | 86.18 | 86.71 | [78.20, 80.06] | [87.72, 90.27] | [90.15, 94.86] | [91.84, 93.70]
Serbo-Croatian | 1,190 | 20,128 | ? | 75.22 | 79.67 | [75.36, 78.79] | [75.36, 77.28] | |
Russian | 451 | 10,171 | ? | 75.63 | 79.52 | [70.49, 72.94] | [74.68, 78.65] | n/a | n/a
Kazakh | 403 | 4,348 | ? | 80.79 | 86.19 | [84.36, 87.79] | [85.56, 88.72] | n/a | n/a
Portuguese | 119 | 3,823 | ? | 72.54 | 87.34 | [77.10, 87.72] | [84.05, 91.96] | |
Swedish | 11 | 239 | ? | 72.90 | 73.86 | [56.00, 82.97] | | |

Sent = sentences, Tok = tokens, Amb = average ambiguity from the morphological analyser
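The Tok and Amb columns can be filled in by counting lexical units and analyses per unit in the analyser's output. A sketch assuming Apertium stream format (as produced by lt-proc), where each lexical unit looks like `^surface/analysis1/analysis2$`:

```python
import re

def ambiguity_stats(analysed_text):
    """Count tokens and average analyses per token in Apertium
    stream-format output, e.g. "^vino/vino<n>/venir<vblex>$"."""
    units = re.findall(r'\^(.*?)\$', analysed_text)  # one match per lexical unit
    counts = [u.count('/') for u in units]           # analyses follow the surface form
    tokens = len(units)
    avg_amb = sum(counts) / tokens if tokens else 0.0
    return tokens, avg_amb
```

This simple sketch ignores escaped characters inside analyses; a robust version would have to handle the stream format's backslash escapes.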

Systems

  • 1st: Selects the first analysis from the morphological analyser.
  • CG: Uses the Constraint Grammar (from the monolingual language package in languages) to preprocess the input.
  • Unigram: A lexicalised unigram tagger.
  • apertium-tagger: The bigram HMM tagger included with Apertium.
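To illustrate the simplest trained system above, a lexicalised unigram tagger can be sketched as follows. This is only a sketch of the general technique, not the exact model evaluated here; the `analyses` callback is a hypothetical stand-in for the morphological analyser:

```python
from collections import Counter, defaultdict

def train_unigram_tagger(tagged_sentences):
    """Count, for each surface form, how often each tag occurs,
    plus overall tag frequencies for the unseen-word fallback."""
    form_counts = defaultdict(Counter)
    tag_totals = Counter()
    for sentence in tagged_sentences:
        for form, tag in sentence:
            form_counts[form.lower()][tag] += 1
            tag_totals[tag] += 1
    return form_counts, tag_totals

def tag_unigram(form_counts, tag_totals, sentence, analyses):
    """Pick each token's most frequent tag among the analyses the
    morphological analyser proposes; unseen forms fall back to the
    globally most frequent proposed tag."""
    output = []
    for form in sentence:
        candidates = analyses(form)           # hypothetical analyser callback
        seen = form_counts.get(form.lower())
        if seen:
            best = max(candidates, key=lambda t: seen[t])
        else:
            best = max(candidates, key=lambda t: tag_totals[t])
        output.append((form, best))
    return output
```

Restricting the choice to the analyser's candidates is what makes the tagger a disambiguator rather than an open-class classifier: it can never emit a tag the analyser rules out.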

Corpora

The tagged corpora used in the experiments are found in the monolingual packages in languages, under the texts/ subdirectory.

Todo