This was the work plan for development of the [[Kazakh and Tatar]] translator in [[Google Summer of Code]] 2012.


* Trimmed coverage means the coverage of both morphological analysers after being trimmed according to the bilingual dictionary of the pair, that is, after being restricted to stems which are also in the bilingual dictionary (a rough sketch of how such coverage is measured follows this list).
* Testvoc for a category means that the category is testvoc clean in both translation directions.
* Evaluation means taking <math>n</math> words and performing an [[evaluation]] of the post-edition word error rate (WER). The output for those <math>n</math> words should be clean.
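
As a rough illustration of how trimmed-coverage figures like those in the table below are obtained: naive coverage is simply the share of corpus tokens that receive at least one analysis from the analyser. The Python sketch below assumes the corpus has already been run through the analyser into Apertium stream format (the pipeline and binary name in the comment are illustrative, not taken from this page); tokens whose analysis starts with <code>*</code> are counted as unknown.

<pre>
#!/usr/bin/env python3
# Naive-coverage sketch. Assumed input: analyser output in Apertium stream
# format, produced by something like (illustrative, not from this page):
#   bzcat corpus.txt.bz2 | apertium-destxt | lt-proc kaz-tat.automorf.bin > analysed.txt
# Unknown tokens come back as ^surface/*surface$, i.e. the analysis starts with '*'.
import re
import sys

LU = re.compile(r'\^([^/^$]+)/([^$]+)\$')  # one lexical unit: ^surface/analysis(es)$

def coverage(stream):
    total = known = 0
    for line in stream:
        for lu in LU.finditer(line):
            total += 1
            if not lu.group(2).startswith('*'):
                known += 1
    return known, total

if __name__ == '__main__':
    known, total = coverage(sys.stdin)
    if total:
        print('%d/%d tokens analysed (%.1f%% naive coverage)'
              % (known, total, 100.0 * known / total))
</pre>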


("news" is <code>tat.news.2005-2011_300K-sentences.txt.bz2</code>, "wp" - <code>kaz.fixed.wikipedia.2010_100k-sentences.txt.bz2</code> <ref>Despite the fact that both lexicons have the same amount of stems, transducer for Tatar usually has a lower coverage. This is because some forms aren't implemented yet, e.g. some verbal forms. Here I gave numbers for both directions, although the domains are different. Not too sure about it, hope this is ok.</ref>)
("news" is <code>tat.news.2005-2011_300K-sentences.txt.bz2</code>, "wp" - <code>kaz.fixed.wikipedia.2010_100k-sentences.txt.bz2</code> <ref>Despite the fact that both lexicons have the same amount of stems, transducer for Tatar usually has a lower coverage. This is probably because some forms aren't implemented yet, e.g. some verbal forms. Here I give numbers for both directions. Note that domains are different.</ref>)


{|class=wikitable
! Week !! Dates !! Trimmed coverage goal !! Reached (news, wp) !! Testvoc !! Completed !! Evaluation !! WER !! Notes
|-
| 0 || 23/04&mdash;21/05 || 45% || ??,&nbsp;?? || <postadv> <ij> || || 500 words || 13.39% || Preliminary evaluation. Translate the story with total coverage and without diagnostics (in both directions). Get a baseline WER. Work on disambiguation; the morphological ambiguities in the story should be resolved.
|-
| 1 || 21/05&mdash;27/05 || 50% || ??,&nbsp;?? || <num> <post> || || - || || Basic numerals and postpositions should be clean.
|-
| 2 || 28/05&mdash;03/06 || 53% || 56.7%,&nbsp;57.4% || <cnjcoo> <cnjadv> <cnjsub> || || - || ||
|-
| 3 || 04/06&mdash;10/06 || 59% || 59.3%,&nbsp;59.8% || <adv> <abbr> || || 200 words || 29.51% || Work on disambiguation.
|-
| 4 || 11/06&mdash;17/06 || 63% || 59.3%,&nbsp;59.9% || <prn> <det> || No || - || ||
|-
| 5 || 18/06&mdash;24/06 || 68% || 67.1%,&nbsp;64.3%<ref>As of Tuesday, 26th June.</ref> || <adj> <adj><advl> || || - || || Adjectives with attributive and adverbial function should be clean. Numerals + affixes should have their phonology solved.
|-
| 6 || 25/06&mdash;01/07 || 70% || 73.4%,&nbsp;70.3% || <n> <num><subst> <np> <adj><subst> || || 500 words || 23.79%<ref><code>apertium-kaz-tat$ perl ../../trunk/apertium-eval-translator/apertium-eval-translator.pl -t texts/3story.kaz-tat.12072012.txt -r texts/3story.kaz-tat.postedit.txt</code></ref> || Midterm evaluation. Work on disambiguation.
|-
| 7 || 02/07&mdash;08/07 || 73% || 73.5%,&nbsp;70.4% || - || || || || Worked on translation of kaz.crp.txt.
|-
| 8 || 09/07&mdash;15/07 || 75% || 76.2%,&nbsp;72.7% || <prn> || No || - || || Pronouns should be testvoc clean.
|-
| 9 || 16/07&mdash;22/07 || 77% || 77.7%,&nbsp;75.5% || - || || 200 words || || Work on disambiguation.
|-
| 10 || 23/07&mdash;29/07 || 80% || 79.1%,&nbsp;76.8% || <v> || No || - || || Verbs should be testvoc clean.
|-
| 11 || <s>30/07&mdash;05/08</s> || 80% || 79.9%,&nbsp;77.4% || - || || - || || Work on disambiguation.
|-
| 12 || <s>06/08&mdash;12/08</s> || 80% || || ''all categories clean'' || || 500 words || || '''Final evaluation''' (see below). Tidying up, releasing.
|}

== Final evaluation ==

The source texts can be found here: [https://svn.code.sf.net/p/apertium/svn/trunk/apertium-kaz-tat/texts/README README]
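
The numbers in the two sections below come from <code>apertium-eval-translator.pl</code>, run on the raw translation and its post-edited version. As a rough guide to how the reported figures relate to each other (a sketch only; the Perl script is the actual tool and the function names here are mine): WER is the word-level edit distance between test and reference divided by the reference length, while PER additionally ignores word order. These formulas are consistent with the edit distances, word counts and percentages reported below.

<pre>
# Sketch of the metrics reported by apertium-eval-translator (not the script itself).
#   WER = edit_distance(test, reference) / |reference|
#   PER = (|test| - position-independent correct words) / |reference|
# "Position-independent correct words" counts words shared by test and reference,
# regardless of their position.
from collections import Counter

def edit_distance(test, ref):
    """Levenshtein distance over word tokens."""
    prev = list(range(len(ref) + 1))
    for i, t in enumerate(test, 1):
        cur = [i]
        for j, r in enumerate(ref, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (t != r)))  # substitution
        prev = cur
    return prev[-1]

def wer(test, ref):
    return edit_distance(test, ref) / len(ref)

def per(test, ref):
    correct = sum((Counter(test) & Counter(ref)).values())
    return (len(test) - correct) / len(ref)

if __name__ == '__main__':
    # File names are hypothetical; the actual test/reference files are named in the logs below.
    test = open('test.txt', encoding='utf-8').read().split()
    ref = open('reference.txt', encoding='utf-8').read().split()
    print('WER: %.2f%%  PER: %.2f%%' % (100 * wer(test, ref), 100 * per(test, ref)))
</pre>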

=== Azattyq news ===

<pre>
selimcan@achilles:~/src/apertium/incubator/apertium-kaz-tat$ perl ../../trunk/apertium-eval-translator/apertium-eval-translator.pl -test texts/4story.kaz-tat.02_09_2012.txt -ref texts/4story.kaz-tat.postedit.txt
Test file: 'texts/4story.kaz-tat.02_09_2012.txt'
Reference file 'texts/4story.kaz-tat.postedit.txt'

Statistics about input files
-------------------------------------------------------
Number of words in reference: 534
Number of words in test: 536
Number of unknown words (marked with a star) in test: 2
Percentage of unknown words: 0.37 %

Results when removing unknown-word marks (stars)
-------------------------------------------------------
Edit distance: 133
Word error rate (WER): 24.91 %
Number of position-independent correct words: 413
Position-independent word error rate (PER): 23.03 %

Results when unknown-word marks (stars) are not removed
-------------------------------------------------------
Edit distance: 135
Word Error Rate (WER): 25.28 %
Number of position-independent correct words: 411
Position-independent word error rate (PER): 23.41 %

Statistics about the translation of unknown words
-------------------------------------------------------
Number of unknown words which were free rides: 2
Percentage of unknown words that were free rides: 100.00 %
</pre>

=== Wikipedia article ===

<pre>
selimcan@achilles:~/src/apertium/incubator/apertium-kaz-tat$ perl ../../trunk/apertium-eval-translator/apertium-eval-translator.pl -test texts/5story.kaz-tat.01_09_2012.txt -ref texts/5story.kaz-tat.postedit.txt
Test file: 'texts/5story.kaz-tat.01_09_2012.txt'
Reference file 'texts/5story.kaz-tat.postedit.txt'

Statistics about input files
-------------------------------------------------------
Number of words in reference: 515
Number of words in test: 518
Number of unknown words (marked with a star) in test:
Percentage of unknown words: 0.00 %

Results when removing unknown-word marks (stars)
-------------------------------------------------------
Edit distance: 61
Word error rate (WER): 11.84 %
Number of position-independent correct words: 460
Position-independent word error rate (PER): 11.26 %

Results when unknown-word marks (stars) are not removed
-------------------------------------------------------
Edit distance: 61
Word Error Rate (WER): 11.84 %
Number of position-independent correct words: 460
Position-independent word error rate (PER): 11.26 %

Statistics about the translation of unknown words
-------------------------------------------------------
Number of unknown words which were free rides: 0
Percentage of unknown words that were free rides: 0%
</pre>


== Notes ==

<references/>
