
Meta-evaluation

Revision as of 20:28, 31 May 2019

Apertium language modules and translation pairs are subject to the following types of evaluation:

  • Morphology coverage / regression testing
  • Size of system
    • Number of stems in lexc, monodix, bidix
    • Number of disambiguation rules
    • Number of lexical selection rules
    • Number of transfer rules
  • Naïve coverage
    • Monolingual naïve coverage
    • Trimmed naïve coverage (i.e., using a trimmed dictionary)
  • Accuracy of analyser
    • Precision/Recall/F-score
  • Accuracy of translation
    • WER/PER/BLEU
  • Cleanliness of translation output
    • Testvoc


Morphology coverage

The tools we have for this are aq-morftest from Apertium quality and morph-test.py.

Both tools have two shortcomings: they don't support directionality restrictions on tests, and they don't return error codes, so a failing test run cannot fail a build.
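Both shortcomings are cheap to fix in a rewrite: tag each test case with a direction and exit with the failure count. The sketch below is hypothetical (the run_tests function and the tuple format are illustrative assumptions, not the morph-test.py API); the analyser is passed in as a callable so the runner stays independent of lttoolbox vs. HFST.

```python
import sys

def run_tests(cases, analyse, direction="gen"):
    """Run morphology test cases and return the number of failures.

    cases: list of (input, expected, direction) tuples. A case is skipped
    when its direction tag doesn't match the requested direction -- the
    directionality restriction the current tools lack.
    """
    failures = 0
    for inp, expected, d in cases:
        if d not in (direction, "both"):
            continue
        got = analyse(inp)
        if got != expected:
            failures += 1
            print(f"FAIL: {inp} -> {got}, expected {expected}")
    return failures

# Exiting with the failure count gives callers the error code the
# current tools don't provide:
#   sys.exit(min(run_tests(cases, analyse), 255))
```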

Naïve coverage

In theory, aq-covtest does this, but mostly people write their own scripts.

A generalised script that supports both hfst and lttoolbox binaries and arbitrary corpora would be useful. It should also optionally output hitparades (frequency lists of unknown forms in the corpus).
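Such a script mostly reduces to counting unknown-marked tokens in the analyser's output. Here is a minimal sketch, assuming Apertium stream format (^surface/analysis$ units, with unknown forms marked by a leading * on the analysis side); the function name is illustrative:

```python
import re
from collections import Counter

def naive_coverage(analysed: str):
    """Naive coverage over Apertium stream-format analyser output.

    Returns (coverage, hitparade) where hitparade is a Counter of
    unknown surface forms, suitable for a frequency list.
    """
    # Each unit looks like ^surface/analysis$; unknowns like ^foo/*foo$
    tokens = re.findall(r'\^([^/$]+)/([^$]+)\$', analysed)
    unknown = Counter(surf for surf, ana in tokens if ana.startswith('*'))
    total = len(tokens)
    covered = total - sum(unknown.values())
    return (covered / total if total else 0.0), unknown
```

Feeding it the output of lt-proc or hfst-proc over a corpus gives both the coverage figure and the hitparade in one pass.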

Translation accuracy

apertium-eval-translator.pl and apertium-eval-translator-line.pl work well, but they are ageing and would probably benefit from being rewritten in Python.
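The core of such a rewrite is small: WER is token-level edit distance divided by reference length, and PER drops the ordering constraint. A sketch, under the usual definitions (the PER here is a common simplified variant, not necessarily the exact formula the Perl scripts implement):

```python
from collections import Counter

def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: token-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    prev = list(range(len(hyp) + 1))  # edit distances for the empty prefix
    for i, r in enumerate(ref, 1):
        cur = [i]
        for j, h in enumerate(hyp, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (r != h)))  # substitution/match
        prev = cur
    return prev[-1] / len(ref) if ref else 0.0

def per(reference: str, hypothesis: str) -> float:
    """Position-independent error rate: ignores word order entirely."""
    ref, hyp = Counter(reference.split()), Counter(hypothesis.split())
    matched = sum((ref & hyp).values())
    n = sum(ref.values())
    return (n - matched) / n if n else 0.0
```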

Translation cleanliness

There are several ways to test translation cleanliness, each suited to different purposes:

  • morphology expansion testvoc ("standard testvoc")
  • prefixed morphology expansion testvoc ("testvoc lite")
  • corpus testvoc
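Whatever the flavour, a testvoc ultimately checks that translated output is free of Apertium's debug symbols (* unknown word, @ failed bidix lookup, # failed generation). A corpus testvoc can therefore be sketched as a scan of translated text for those marks; this is an illustrative sketch, not one of the existing testvoc scripts:

```python
import re

# Debug symbols Apertium leaves in output when something goes wrong:
#   * unknown word, @ untranslated lemma, # generation failure
DEBUG_MARKS = re.compile(r'[*@#]\S+')

def corpus_testvoc(translated: str):
    """Return the list of dirty tokens found in translated output.

    An empty list means the corpus passed; a non-empty list gives you
    exactly which forms need fixing.
    """
    return DEBUG_MARKS.findall(translated)
```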