
Comparison of part-of-speech tagging systems


Revision as of 18:55, 24 May 2016


Apertium aims to have really good part-of-speech tagging, but in many cases its taggers fall below the state of the art (around 97% tagging accuracy). This page collects a comparison of the tagging systems in Apertium and gives some ideas of what could be done to improve them.

In the following table, each bracketed entry gives the [lowest, mean ± confidence interval, highest] accuracy values from 10-fold cross-validation.
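Such an interval can be reproduced from per-fold accuracies. Below is a minimal sketch, assuming a two-sided 95% t-based interval over the 10 fold scores (the page does not state which confidence level it uses); the fold values in the example are made up for illustration:

```python
import statistics

def cv_interval(fold_accuracies, t_crit=2.262):
    """Summarise cross-validation accuracies as (low, mean, half-width, high).
    t_crit defaults to the two-sided 95% t value for 9 degrees of freedom
    (10 folds); this confidence level is an assumption, not taken from the page."""
    mean = statistics.mean(fold_accuracies)
    half = t_crit * statistics.stdev(fold_accuracies) / len(fold_accuracies) ** 0.5
    return min(fold_accuracies), mean, half, max(fold_accuracies)

# Invented fold accuracies, purely for illustration
folds = [89.6, 90.1, 91.0, 91.5, 91.7, 91.8, 92.0, 92.4, 93.1, 93.6]
low, mean, half, high = cv_interval(folds)
print(f"[{low:.2f}, {mean:.2f}±{half:.2f}, {high:.2f}]")
```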

{|class=wikitable
!rowspan=3|System !!colspan=7|Language
|-
! Catalan !! Spanish !! Serbo-Croatian !! Russian !! Kazakh !! Portuguese !! Swedish
|-
! <small>24,144</small> !! <small>21,247</small> !! <small>20,128</small> !! <small>10,171</small> !! <small>4,348</small> !! <small>3,823</small> !! <small>239</small>
|-
| '''1st''' ||align=right| 81.66 ||align=right| 86.18 ||align=right| 75.22 ||align=right| 75.63 ||align=right| 80.79 ||align=right| 72.54 ||align=right| 72.90
|-
| '''CG→1st''' ||align=right| 83.79 ||align=right| 86.71 ||align=right| 79.67 ||align=right| 79.52 ||align=right| 86.19 ||align=right| 87.34 ||align=right| 73.86
|-
| '''Unigram model 1''' ||align=right| [89.62, 91.72±1.37, 93.60]
|-
| '''CG→Unigram model 1''' ||align=right| [90.02, 92.37±1.33, 94.26]
|-
| '''Unigram model 2''' ||align=right| [89.42, 91.78±1.30, 93.39]
|-
| '''CG→Unigram model 2''' ||align=right| [89.62, 92.06±1.30, 93.73]
|-
| '''Unigram model 3''' ||align=right| [89.42, 91.74±1.29, 93.39]
|-
| '''CG→Unigram model 3''' ||align=right| [89.62, 92.03±1.29, 93.66]
|-
| '''Bigram (unsup, 0 iters)''' ||align=right| [82.63, 85.05±1.22, 86.71]
|-
| '''Bigram (unsup, 50 iters)''' ||align=right| [86.48, 88.81±1.36, 90.72]
|-
| '''Bigram (unsup, 250 iters)''' ||align=right| [86.08, 88.53±1.35, 90.26]
|-
| '''CG→Bigram (unsup, 0 iters)''' ||align=right| [86.92, 88.96±1.21, 90.95]
|-
| '''CG→Bigram (unsup, 50 iters)''' ||align=right| [87.05, 90.77±1.68, 92.34]
|-
| '''CG→Bigram (unsup, 250 iters)''' ||align=right| [86.85, 90.54±1.67, 92.41]
|-
| '''Bigram (sup, 0 iters)''' ||align=right| [92.16, 94.60±1.06, 95.87]
|-
| '''Bigram (sup, 50 iters)''' ||align=right| [89.73, 91.82±1.08, 93.59]
|-
| '''Bigram (sup, 250 iters)''' ||align=right| [88.71, 91.43±1.29, 93.45]
|-
| '''CG→Bigram (sup, 0 iters)''' ||align=right| [91.23, 94.62±1.38, 95.99]
|-
| '''CG→Bigram (sup, 50 iters)''' ||align=right| [89.41, 92.31±1.28, 94.06]
|-
| '''CG→Bigram (sup, 250 iters)''' ||align=right| [88.67, 92.02±1.43, 93.86]
|-
| '''Lwsw (unsup, 0 iters)''' ||align=right| [88.31, 90.16±1.00, 91.80]
|-
| '''Lwsw (unsup, 50 iters)''' ||align=right| [89.11, 90.51±0.98, 92.33]
|-
| '''Lwsw (unsup, 250 iters)''' ||align=right| [88.99, 90.51±0.98, 92.33]
|-
| '''CG→Lwsw (unsup, 0 iters)''' ||align=right| [88.33, 90.78±1.26, 92.93]
|-
| '''CG→Lwsw (unsup, 50 iters)''' ||align=right| [89.21, 91.05±1.21, 93.33]
|-
| '''CG→Lwsw (unsup, 250 iters)''' ||align=right| [89.35, 91.06±1.25, 93.40]
|-
| '''kaz-tagger''' ||
|-
| '''CG→kaz-tagger''' ||
|}
{|class=wikitable
!rowspan=2|Language !!colspan=3|Corpus !!colspan=6|System
|-
! Sent !! Tok !! Amb !! 1st !! CG→1st !! Unigram !! CG→Unigram !! apertium-tagger !! CG→apertium-tagger
|-
| Catalan ||align=right| 1,413 ||align=right| 24,144 || ? ||align=right| 81.85 ||align=right| 83.96 || [75.65, 78.46] || [87.76, 90.48] || [94.16, 96.28] || [93.92, 96.16]
|-
| Spanish ||align=right| 1,271 ||align=right| 21,247 || ? ||align=right| 86.18 ||align=right| 86.71 || [78.20, 80.06] || [87.72, 90.27] || [90.15, 94.86] || [91.84, 93.70]
|-
| Serbo-Croatian ||align=right| 1,190 ||align=right| 20,128 || ? ||align=right| 75.22 ||align=right| 79.67 || [75.36, 78.79] || [75.36, 77.28] || ||
|-
| Russian ||align=right| 451 ||align=right| 10,171 || ? ||align=right| 75.63 ||align=right| 79.52 || [70.49, 72.94] || [74.68, 78.65] || n/a || n/a
|-
| Kazakh ||align=right| 403 ||align=right| 4,348 || ? ||align=right| 80.79 ||align=right| 86.19 || [84.36, 87.79] || [85.56, 88.72] || n/a || n/a
|-
| Portuguese ||align=right| 119 ||align=right| 3,823 || ? ||align=right| 72.54 ||align=right| 87.34 || [77.10, 87.72] || [84.05, 91.96] || ||
|-
| Swedish ||align=right| 11 ||align=right| 239 || ? ||align=right| 72.90 ||align=right| 73.86 || [56.00, 82.97] || || ||
|}

Sent = sentences, Tok = tokens, Amb = average ambiguity (analyses per token) from the morphological analyser
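The Amb figure can be computed directly from analyser output. A minimal sketch, assuming Apertium's stream format (`^surface/analysis1/analysis2/...$`); the helper name is hypothetical:

```python
import re

def average_ambiguity(analysed_text):
    """Average number of analyses per token in an Apertium-style
    analyser output stream, where each lexical unit looks like
    ^surface/analysis1/analysis2/...$."""
    units = re.findall(r'\^(.*?)\$', analysed_text)
    # Each analysis is preceded by one '/', so counting slashes
    # gives the number of analyses for that surface form.
    counts = [unit.count('/') for unit in units]
    return sum(counts) / len(counts)

sample = '^the/the<det><def><sp>$ ^run/run<n><sg>/run<vblex><inf>$'
print(average_ambiguity(sample))  # (1 + 2 analyses) / 2 tokens = 1.5
```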

Systems

  • 1st: Selects the first analysis from the morphological analyser.
  • CG: Uses the Constraint Grammar (from the monolingual language package in languages) to preprocess the input.
  • Unigram: Lexicalised unigram tagger.
  • apertium-tagger: Uses the bigram HMM tagger included with Apertium.
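As a rough illustration of the difference between the 1st baseline and a lexicalised unigram tagger, here is a sketch under stated assumptions (function names and the unseen-word fallback are assumptions, not Apertium's actual implementation):

```python
from collections import Counter, defaultdict

def train_unigram(tagged_corpus):
    """Count (word, tag) frequencies; a lexicalised unigram tagger
    simply prefers the tag seen most often with each word form."""
    counts = defaultdict(Counter)
    for word, tag in tagged_corpus:
        counts[word.lower()][tag] += 1
    return counts

def tag(counts, word, analyses):
    """Pick the most frequent tag for this word form, restricted to
    the tags licensed by the analyser; for unseen words, fall back
    to the first analysis (i.e. the '1st' baseline)."""
    seen = counts.get(word.lower())
    if seen:
        candidates = [t for t, _ in seen.most_common() if t in analyses]
        if candidates:
            return candidates[0]
    return analyses[0]

# Toy training data: 'run' seen twice as a verb, once as a noun
model = train_unigram([('run', 'vblex'), ('run', 'vblex'), ('run', 'n')])
print(tag(model, 'run', ['n', 'vblex']))   # the more frequent tag wins
print(tag(model, 'walk', ['n', 'vblex']))  # unseen word: first analysis
```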

Corpora

The tagged corpora used in the experiments are found in the monolingual packages in languages, under the texts/ subdirectory.

Todo
