'''Evaluation''' can give you some idea as to how well a language pair works in practice. There are many ways to evaluate, and the test chosen should depend on the [[Assimilation and dissemination|intended use of the language pair]].
Most evaluations focus on numerical metrics like WER, which make the most sense when done on fresh post-edits. WER gives less information when run against a pre-existing translation (there are many possible correct translations, whereas a post-edit is shaped by the MT output it started from). WER is also quite far from the task of ''gisting'', where cloze-like tests may be a more useful metric and interviews may give more useful information. Beware of overfitting and [https://www.nngroup.com/articles/campbells-law/ Campbell's law].
Common evaluation measures and methods are:
* how many words need to be changed before a text is publication-ready (Word Error Rate, see [http://en.wikipedia.org/wiki/Word_error_rate Wikipedia on WER]), here '''lower scores are better''' (see the sketch after this list)
* how many word [[N-gram]]s are common to the MT output and one or more reference translations (see [http://en.wikipedia.org/wiki/BLEU Wikipedia on Bleu], [https://en.wikipedia.org/wiki/METEOR Meteor] or [http://en.wikipedia.org/wiki/NIST_%28metric%29 NIST]), here '''higher scores are better'''
* how many character [[N-gram]]s are common to the MT output and a post-edit (the Fuzzy Match score, an unordered comparison using the Sørensen–Dice coefficient; also sketched after this list).[http://amtaweb.org/wp-content/uploads/2015/10/MTSummitXV_ResearchTrack.pdf#page=138]
** or the [https://aclanthology.org/W15-3049/ Character N-gram F-score] (code at https://github.com/Waino/chrF)
* how well a user ''understands'' the message of the original text (this typically requires an experiment with real human subjects; see the [[Assimilation Evaluation Toolkit]], which lets you create gap-filling tests)
* user interviews, to learn about the subjective experience of using the translator for a given task (whether post-editing or gisting)
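To make the WER and Fuzzy Match measures above concrete, here is a minimal Python sketch of word-level WER and of a Sørensen–Dice score over character n-grams. It is illustrative only: the whitespace tokenisation and the n-gram order (3) are assumptions, and this is not the code used by Apertium's evaluation tools.

<pre>
# Minimal sketches of two of the metrics above. Illustrative only:
# these are NOT the implementations used by Apertium's evaluation scripts.
from collections import Counter

def wer(hyp, ref):
    """Word error rate: word-level edit distance divided by reference length."""
    h, r = hyp.split(), ref.split()
    d = [[0] * (len(r) + 1) for _ in range(len(h) + 1)]
    for i in range(len(h) + 1):
        d[i][0] = i                                 # delete all hypothesis words
    for j in range(len(r) + 1):
        d[0][j] = j                                 # insert all reference words
    for i in range(1, len(h) + 1):
        for j in range(1, len(r) + 1):
            cost = 0 if h[i - 1] == r[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution / match
    return d[len(h)][len(r)] / len(r)

def dice(hyp, ref, n=3):
    """Sørensen–Dice coefficient over character n-grams (order-independent)."""
    grams = lambda s: Counter(s[i:i + n] for i in range(len(s) - n + 1))
    a, b = grams(hyp), grams(ref)
    return 2 * sum((a & b).values()) / (sum(a.values()) + sum(b.values()))

print(wer("the cat sat on mat", "the cat sat on the mat"))   # one missing word -> 1/6
print(dice("the cat sat on mat", "the cat sat on the mat"))  # closer to 1 is better
</pre>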
Most released language pairs have had some evaluation; see [[Quality]] for a per-pair summary.
{{TOCD}}
==Using apertium-eval-translator for WER and PER==
[https://github.com/apertium/apertium-eval-translator apertium-eval-translator.pl] is a script written in Perl that calculates the word error rate (WER) and the position-independent word error rate (PER) between a translation performed by an Apertium-based MT system and its human-corrected translation, at document level. Although it has been designed to evaluate Apertium-based systems, it can easily be adapted to evaluate other MT systems.
To use it, first translate a text with Apertium and save the output as <code>MT.txt</code>; then manually post-edit that output until it is understandable and grammatical (avoiding major rewrites) and save the result as <code>postedit.txt</code>. Then run <code>apertium-eval-translator -test MT.txt -ref postedit.txt</code> and you will get a set of numbers indicating how well the translation works for post-editing.
If your text is fairly long (>10k words), the full WER calculation is quite slow. You can speed it up with the <code>-b</code>/<code>-beam</code> option, which makes the WER calculation take only N words of context into account. Be sure to make N large enough, otherwise you may get an artificially low or high WER. As an example, a 17k-word text that took nearly an hour for the full WER took a few seconds with <code>-b 150</code> and gave the same result (19.69%), whereas <code>-b 5</code> gave 73.54% and <code>-b 15</code> gave 7.85%. If you don't have time to do a full WER without <code>-beam</code>, there is a wrapper that increases the beam context N until the score seems to stabilise: <code>beam-eval-until-stable -t testfile -r reffile</code>.
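Below is a rough Python sketch of why such a window helps, using a banded word-level edit distance. It is not the actual implementation of <code>apertium-eval-translator</code> (whose help text describes the option as looking only at the ''n'' previous and posterior neighbouring words), and, as the numbers above show, too small a window can distort the score in either direction.

<pre>
# Rough sketch of why a beam/window makes WER cheaper: only cells within
# `beam` words of the diagonal of the edit-distance table are filled in,
# so the work drops from len(hyp)*len(ref) to roughly len(hyp)*(2*beam+1).
# This is NOT the code of apertium-eval-translator, just the general idea.

def banded_wer(hyp, ref, beam):
    h, r = hyp.split(), ref.split()
    INF = float("inf")
    d = [[INF] * (len(r) + 1) for _ in range(len(h) + 1)]
    for j in range(min(beam, len(r)) + 1):
        d[0][j] = j
    for i in range(1, len(h) + 1):
        if i <= beam:
            d[i][0] = i
        for j in range(max(1, i - beam), min(len(r), i + beam) + 1):
            cost = 0 if h[i - 1] == r[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution / match
    # If the texts differ in length by more than `beam` words, the final
    # cell is never reached and the result is unreliable (here: infinity).
    return d[len(h)][len(r)] / len(r)
</pre>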
===Detailed usage===
<pre>
apertium-eval-translator -test testfile -ref reffile [-beam <n>]

Options:
  -test|-t     Specify the file with the translation to evaluate
  -ref|-r      Specify the file with the reference translation
  -beam|-b     Perform a beam search by looking only to the <n> previous
               and <n> posterior neighboring words (optional parameter
               to make the evaluation much faster)
  -help|-h     Show this help message
  -version|-v  Show version information and exit

Note: The <n> value provided with -beam is language-pair dependent. The
      closer the languages involved are, the lesser <n> can be without
      affecting the evaluation results. This parameter only affects the
      WER evaluation.

Note: Reference translation MUST have no unknown-word marks, even if
      they are free rides.

This software calculates (at document level) the word error rate (WER)
and the position-independent word error rate (PER) between a translation
performed by the Apertium MT system and a reference translation obtained
by post-editing the system output.

It is assumed that unknown words are marked with a star (*), as Apertium
does; nevertheless, it can be easily adapted to evaluate other MT systems
that do not mark unknown words with a star.
</pre>

See [[English and Esperanto/Evaluation]] for an example. In [[Northern Sámi and Norwegian]] there is a [http://apertium.svn.sourceforge.net/viewvc/apertium/incubator/apertium-sme-nob/WER/ Makefile] to translate a set of source-language files and then run the evaluation on them.
===dwdiff===

If you just need a quick-and-dirty PER (position-independent WER) test, you can use <code>dwdiff -s reference.txt MT_output.txt</code> and look for % changed.
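PER ignores word order entirely: it only compares which words occur in the two texts, regardless of position. Below is a minimal Python sketch of one common formulation; the whitespace tokenisation is an assumption, and this is not what <code>dwdiff</code> or <code>apertium-eval-translator</code> compute internally.

<pre>
# Position-independent error rate as a bag-of-words comparison.
# Illustrative sketch only; the real tools have their own tokenisation rules.
from collections import Counter

def per(hyp, ref):
    h, r = Counter(hyp.split()), Counter(ref.split())
    matched = sum((h & r).values())             # words matched regardless of position
    n_hyp, n_ref = sum(h.values()), sum(r.values())
    return (max(n_hyp, n_ref) - matched) / n_ref

print(per("on mat the cat sat", "the cat sat on the mat"))   # word order is ignored
</pre>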
==Pair bootstrap resampling==

===Detailed usage===

<pre>
bootstrap_resampling.pl -source srcfile -test testfile -ref reffile
                        -times <n> -eval /full/path/to/eval/script

Options:
  -source|-s  Specify the file with the source file
  -test|-t    Specify the file with the translations to evaluate
  -ref|-r     Specify the file with the reference translations
  -times|-n   Specify how many times the resampling should be done
  -eval|-e    Specify the full path to the MT evaluation script
  -help|-h    Show this help message

Note: Reference translation MUST have no unknown-word marks, even if
      they are free rides.
</pre>
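Bootstrap resampling draws sentences from the test set with replacement many times, recomputes the evaluation metric on each sample, and looks at the spread of the resulting scores; it is typically used to put a confidence interval on a score, or to test whether two systems differ significantly. The Python sketch below shows the simpler confidence-interval variant and is illustrative only: it assumes the test set is available as parallel lists of hypothesis and reference sentences and reuses a document-level metric such as the <code>wer()</code> sketch earlier on this page. <code>bootstrap_resampling.pl</code> above is the tool actually used here.

<pre>
# Sketch of bootstrap resampling for an MT metric: resample the test set
# with replacement many times and look at the spread of the scores.
# Illustrative only; bootstrap_resampling.pl has its own options and output.
import random

def bootstrap_ci(hyp_sents, ref_sents, metric, times=1000, alpha=0.05):
    n = len(hyp_sents)
    scores = []
    for _ in range(times):
        idx = [random.randrange(n) for _ in range(n)]   # sample with replacement
        hyp = " ".join(hyp_sents[i] for i in idx)
        ref = " ".join(ref_sents[i] for i in idx)
        scores.append(metric(hyp, ref))                 # e.g. the wer() sketch above
    scores.sort()
    # approximate (1 - alpha) confidence interval from the sorted scores
    return scores[int(alpha / 2 * times)], scores[int((1 - alpha / 2) * times) - 1]
</pre>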
==Evaluating with Wikipedia==
:''Main article: [[Evaluating with Wikipedia]]''
==See also==
* [[Assimilation Evaluation Toolkit]] / [[Ideas for Google Summer of Code/Apertium assimilation evaluation toolkit]]
* [[Regression testing]]
* [[Apertium-regtest]]
* [[Quality control]]
* [[Calculating coverage]]
==External links==

* [http://en.wikipedia.org/wiki/Evaluation_of_machine_translation Wikipedia on Evaluation of MT] (by [[User:Francis_Tyers|Francis Tyers]])

<references/>
[[Category:Evaluation]]
[[Category:Quality control]]
[[Category:Documentation in English]]