Evaluation of the kaz-tur Machine Translation System

The system was evaluated by measuring translation quality, expressed as the error rate of the text produced by the system when compared with post-edited versions of that text.

The translation quality was measured using two metrics: word error rate (WER) and position-independent word error rate (PER). Both metrics are based on the Levenshtein distance (Levenshtein, 1965). Metrics based on word error rate were chosen so that the system could be compared against systems built on similar technology, and to assess its usefulness in a real setting, that is, translating for dissemination.
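To make the two metrics concrete, here is a minimal Python sketch of how WER and PER can be computed for a single hypothesis/reference pair. It only illustrates the definitions above; the scores in the table below were produced with apertium-eval-translator, not with this code, and the PER calculation shown is one common formulation.

<pre>
from collections import Counter

def levenshtein(a, b):
    """Word-level Levenshtein (edit) distance between two token lists."""
    # Standard dynamic-programming edit distance, one row at a time.
    prev = list(range(len(b) + 1))
    for i, wa in enumerate(a, 1):
        cur = [i]
        for j, wb in enumerate(b, 1):
            cost = 0 if wa == wb else 1
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + cost))  # substitution or match
        prev = cur
    return prev[-1]

def wer(hypothesis, reference):
    """Word error rate: word-level edit distance divided by reference length."""
    hyp, ref = hypothesis.split(), reference.split()
    return levenshtein(hyp, ref) / len(ref)

def per(hypothesis, reference):
    """Position-independent error rate: compares only the multisets of words,
    so reorderings are not penalised (one common formulation)."""
    hyp, ref = Counter(hypothesis.split()), Counter(reference.split())
    matches = sum((hyp & ref).values())
    ref_len, hyp_len = sum(ref.values()), sum(hyp.values())
    return (max(ref_len, hyp_len) - matches) / ref_len
</pre>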

System       WER (%)   PER (%)
new-system   20.87     19.98
old-system   45.77     41.69

Besides calculating WER and PER for our new Apertium Kazakh-Turkish MT system, we did the same for the old Apertium Kazakh-Turkish MT system. The procedure was the same for both. We took a small (1,025 tokens) Kazakh text, a concatenation of several articles from Wikipedia, and translated it with the two MT systems. The output of each system was post-edited independently, to avoid biasing the evaluation in favour of one particular system. We then calculated WER and PER for each system using apertium-eval-translator (http://wiki.apertium.org/wiki/apertium-eval-translator).
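The sketch below shows one way such a document-level comparison could be scripted. It assumes the wer and per helpers from the sketch above have been saved in a local module named eval_metrics.py, and the file names are placeholders; the actual figures above were obtained with apertium-eval-translator on the files in the eval/ directory linked below.

<pre>
# Hypothetical driver for the document-level scores; eval_metrics.py and the
# file names below are assumptions, not part of the apertium-kaz-tur repository.
from eval_metrics import wer, per

pairs = {
    "new-system": ("new-system.output.txt", "new-system.postedited.txt"),
    "old-system": ("old-system.output.txt", "old-system.postedited.txt"),
}

for name, (out_path, ref_path) in pairs.items():
    with open(out_path, encoding="utf-8") as f:
        hypothesis = f.read()
    with open(ref_path, encoding="utf-8") as f:
        reference = f.read()
    # Score the whole document as one token sequence against its post-edited reference.
    print(f"{name}: WER {100 * wer(hypothesis, reference):.2f}%  "
          f"PER {100 * per(hypothesis, reference):.2f}%")
</pre>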

The text files used to calculate the evaluation are available on GitHub: https://github.com/apertium/apertium-kaz-tur/tree/master/eval.

Because the error rate for the new system was low, we also carried out a differential evaluation against the old system, to check whether the error rate was in fact lower. The differential evaluation results have been added to the apertium-kaz-tur repository on GitHub: https://github.com/apertium/apertium-kaz-tur/blob/master/eval/test.txt

We manually checked each translation output by the structural-transfer module to see whether the rules applied by the new system gave better or worse results than the old system. We found that in 33 sentences out of 100 the new system was better than the old one, while in only 3 sentences out of 100 the old system did better than the new one. In 56 out of 100 sentences both systems produced the right translation, and in 7 out of 100 both produced a bad translation.
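As a small illustration of how such per-sentence judgements can be tallied, here is a hypothetical Python sketch; the outcome labels are made up for the example, and the actual judgements are recorded in eval/test.txt in the repository.

<pre>
from collections import Counter

# Hypothetical outcome labels for the manual differential evaluation.
OUTCOMES = ["new-better", "old-better", "both-good", "both-bad"]

def summarise(judgements):
    """Count each outcome and report it alongside its share of the judged sentences."""
    counts = Counter(judgements)
    total = sum(counts.values())
    for outcome in OUTCOMES:
        share = 100 * counts[outcome] / total
        print(f"{outcome}: {counts[outcome]}/{total} ({share:.1f}%)")

# The per-sentence judgements reported above: 33 where the new system was better,
# 3 where the old one was, 56 where both were right, 7 where both were bad.
summarise(["new-better"] * 33 + ["old-better"] * 3
          + ["both-good"] * 56 + ["both-bad"] * 7)
</pre>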