Apertium-uzb-kaa
Work plan
Week | Dates | Stems goal | Achieved? | Stems | Cov uzb | Cov kaa | Corpus testvoc uzb→kaa (no *, no */@, no */@/# errors) | Corpus testvoc kaa→uzb (no *, no */@, no */@/# errors) | WER, PER uzb→kaa | WER, PER kaa→uzb | Evaluation | Notes
---|---|---|---|---|---|---|---|---|---|---|---|---
0 | 20/05—26/05 | -- | -- | | udhr(34.21) | bible(78.80) | jam(95.37) | bible(68.02) | jam(17.7, 17.7, 14.8) | jam(106.15, 102.91), bible() | |
1 | 27/05—02/06 | 5000 from rus-{uzb,kaa} to each monodix | ✘ | (kaa) 7115, (uzb) 5934, (uzb-kaa) 412 | jam(79.61), udhr(34.21) | bible(79.58) | jam(95.37) | bible(68.08) | | | | uzb: 6c3887d kaa: fbb4425
2 | 03/06—09/06 | 5000 from rus-{uzb,kaa} to each monodix | | | | | | | | | |
3 | 10/06—16/06 | 5000 from rus-{uzb,kaa} to each monodix | | | | | | | | | |
4 | 17/06—23/06 | 5000 from rus-{uzb,kaa} to each monodix | | | | | | | | | |
Comments
- Bidix: number of stems in the bilingual dictionary.
- The number of stems is taken from the header that "apertium-dixtools format-1line 10 55 apertium-uzb-kaa.uzb-kaa.dix" produces (a cross-check sketch follows this list).
- Corpus testvoc: cat <txt> | apertium-uzb-kaa/testvoc/corpus/trimmed-coverage.sh {uzb-kaa,kaa-uzb}. Corpora can be found here.
- WER is calculated as in the sample run at the end of this page (a sketch of the underlying formulas follows this list). The current WER numbers are only a crude approximation, since the input text and the -ref text are probably independent translations rather than a parallel text.
- Evaluation means taking a set of words and performing a post-edition word error rate (WER) evaluation on them; the output for those words should be clean.
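
As a cross-check on the bidix figures, the sketch below counts the <e> translation entries in the .dix file directly. This is an approximation, not what apertium-dixtools does internally, and the two counts can differ, for instance around direction-restricted entries.

 import xml.etree.ElementTree as ET

 def count_bidix_entries(path):
     """Count <e> translation entries across all <section>s of a .dix file."""
     root = ET.parse(path).getroot()
     return sum(len(section.findall("e")) for section in root.iter("section"))

 # The pair's bilingual dictionary, as named in the Comments above.
 print(count_bidix_entries("apertium-uzb-kaa.uzb-kaa.dix"))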
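For reference, the two scores that apertium-eval-translator reports follow the textbook definitions: WER is the word-level edit distance divided by the reference length, and PER is its position-independent counterpart based on bag-of-words overlap. The sketch below is an illustrative reimplementation, not the script's own code; plugging in the numbers from the sample run at the end of this page (edit distance 325 over 341 reference words, 26 position-independent correct words) reproduces the reported 95.31 % and 92.38 %.

 from collections import Counter

 def wer(ref, hyp):
     """Word error rate: Levenshtein distance over word lists / len(ref)."""
     d = list(range(len(hyp) + 1))       # row for an empty reference prefix
     for i, r in enumerate(ref, 1):
         prev, d[0] = d[0], i
         for j, h in enumerate(hyp, 1):
             cur = min(d[j] + 1,         # deletion
                       d[j - 1] + 1,     # insertion
                       prev + (r != h))  # substitution, free if words match
             prev, d[j] = d[j], cur
     return d[-1] / len(ref)

 def per(ref, hyp):
     """Position-independent WER: 1 - shared word counts / len(ref)."""
     correct = sum((Counter(ref) & Counter(hyp)).values())
     return 1 - correct / len(ref)

 # Hypothetical file names: reference translation vs. apertium output.
 ref = open("kaa-ref.txt").read().split()
 hyp = open("uzb-kaa-output.txt").read().split()
 print(f"WER {100 * wer(ref, hyp):.2f} %, PER {100 * per(ref, hyp):.2f} %")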
Related terms:
- Trimmed coverage means the coverage of the two morphological analysers after they have been trimmed to the bilingual dictionary of the pair, that is, so that they only contain stems which also occur in the bilingual dictionary (a measurement sketch follows this list).
- Testvoc for a category means that the category is testvoc clean, in both translation directions.
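
A rough sketch of how a coverage figure like the ones in the table can be measured: run the corpus through the deformatter and the trimmed analyser, then count the share of tokens that receive an analysis (lt-proc marks unknowns with a star). The analyser file name below follows the usual Apertium pair naming convention and is an assumption; the pair's own trimmed-coverage.sh remains the authoritative tool.

 import re
 import subprocess
 import sys

 def trimmed_coverage(corpus_path, automorf="uzb-kaa.automorf.bin"):
     """Share of tokens the (trimmed) analyser knows."""
     text = open(corpus_path).read()
     deformatted = subprocess.run(["apertium-destxt"], input=text,
                                  capture_output=True, text=True).stdout
     analysed = subprocess.run(["lt-proc", automorf], input=deformatted,
                               capture_output=True, text=True).stdout
     units = re.findall(r"\^(.*?)\$", analysed)  # ^surface/analyses$ units
     known = sum(1 for unit in units if "/*" not in unit)
     return known / len(units) if units else 0.0

 print(f"{100 * trimmed_coverage(sys.argv[1]):.2f}")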
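Likewise, a minimal illustration of what the testvoc columns check: translate a corpus and count the tokens still carrying Apertium error marks, where * marks words unknown to the analyser, @ words with no bidix translation, and # words the generator could not inflect. This only approximates the pair's testvoc scripts (for one thing, # can also appear word-internally), so treat it as a sanity check.

 import subprocess
 import sys

 def testvoc_marks(corpus_path, direction="uzb-kaa"):
     """Count error-marked tokens in the translation of a corpus."""
     with open(corpus_path) as corpus:
         out = subprocess.run(["apertium", "-d", ".", direction], stdin=corpus,
                              capture_output=True, text=True).stdout
     # Token-initial marks only; a clean testvoc means all three counts are 0.
     return {mark: sum(token.startswith(mark) for token in out.split())
             for mark in ("*", "@", "#")}

 print(testvoc_marks(sys.argv[1]))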
 apertium-uzb-kaa$ perl ../../../../../../src/sourceforge-apertium/trunk/apertium-eval-translator/apertium-eval-translator.pl \
     -test <(cat ../../../data4apertium/corpora/jam/uzb.txt | apertium -d . uzb-kaa) \
     -ref ../../../data4apertium/corpora/jam/kaa.txt
 Test file: '/dev/fd/63'
 Reference file '../../../data4apertium/corpora/jam/kaa.txt'

 Statistics about input files
 -------------------------------------------------------
 Number of words in reference: 341
 Number of words in test: 309
 Number of unknown words (marked with a star) in test: 284
 Percentage of unknown words: 91.91 %

 Results when removing unknown-word marks (stars)
 -------------------------------------------------------
 Edit distance: 325
 Word error rate (WER): 95.31 %
 Number of position-independent correct words: 26
 Position-independent word error rate (PER): 92.38 %

 Results when unknown-word marks (stars) are not removed
 -------------------------------------------------------
 Edit distance: 328
 Word Error Rate (WER): 96.19 %
 Number of position-independent correct words: 21
 Position-independent word error rate (PER): 93.84 %

 Statistics about the translation of unknown words
 -------------------------------------------------------
 Number of unknown words which were free rides: 3
 Percentage of unknown words that were free rides: 1.06 %