Kazakh and Tatar/Work plan
Revision as of 11:20, 6 August 2012
This is the work plan for development of the Kazakh and Tatar translator in Google Summer of Code 2012.
- Trimmed coverage means the coverage of both morphological analysers after they have been trimmed against the bilingual dictionary of the pair, i.e. so that they contain only stems which are also in the bilingual dictionary.
- Testvoc for a category means that the category is testvoc clean, in both translation directions.
- Evaluation means taking a sample of words and measuring the post-edition word error rate (WER); the output for those words should be clean. ("news" is tat.news.2005-2011_300K-sentences.txt.bz2, "wp" is kaz.fixed.wikipedia.2010_100k-sentences.txt.bz2[1])
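The trimmed-coverage numbers in the table below can be computed from an analysed corpus. As a hedged illustration (not the script actually used in the project), the sketch below assumes Apertium's stream format, in which a token the analyser does not know carries a `/*`-prefixed analysis (e.g. `^foo/*foo$`):

```python
def coverage(analysed_tokens):
    """Fraction of tokens that received at least one analysis.

    Assumes Apertium stream format: unknown tokens carry a '/*'
    analysis, e.g. '^foo/*foo$', while known tokens look like
    '^kitap/kitap<n><nom>$'.
    """
    if not analysed_tokens:
        return 0.0
    known = sum(1 for t in analysed_tokens if "/*" not in t)
    return known / len(analysed_tokens)

# Toy example: two analysed tokens, one unknown.
sample = ["^kitap/kitap<n><nom>$", "^һәм/һәм<cnjcoo>$", "^foo/*foo$"]
print(round(coverage(sample), 3))  # 2 of 3 tokens analysed
```

In practice the token stream would come from running the corpus through the trimmed analyser rather than from a hand-written list.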
| Week | Dates | B-trimmed Cov. Goal | Reached (news, wp) | Testvoc | Completed | Evaluation | WER | Notes |
|---|---|---|---|---|---|---|---|---|
| 0 | | 45% | ??, ?? | <postadv> <ij> | | 500 words | 13.39% | Preliminary evaluation. Translate the story, total coverage and without diagnostics (in both directions). Get a baseline WER. Work on disambiguation; the morphological ambiguities in the story should be resolved. |
| 1 | | 50% | ??, ?? | <num> <post> | | - | | Basic numerals and postpositions should be clean. |
| 2 | | 53% | 56.7%, 57.4% | <cnjcoo> <cnjadv> <cnjsub> | | - | | |
| 3 | | 59% | 59.3%, 59.8% | <adv> <abbr> | | 200 words | 29.51% | Work on disambiguation. |
| 4 | | 63% | 59.3%, 59.9% | <prn> <det> | No | - | | |
| 5 | | 68% | 67.1%, 64.3%[2] | <adj> <adj><advl> | | - | | Adjectives with attributive and adverbial function should be clean. Numerals + affixes should have their phonology solved. |
| 6 | | 70% | 73.4%, 70.3% | <n> <num><subst> <np> <adj><subst> | | 500 words | 23.79%[3] | Midterm evaluation. Work on disambiguation. |
| 7 | | 73% | 73.5%, 70.4% | - | | | | Worked on translation of kaz.crp.txt |
| 8 | | 75% | 76.2%, 72.7% | <prn> | No | - | | Pronouns should be testvoc clean. |
| 9 | | 77% | 77.7%, 75.5% | - | | 200 words | | Work on disambiguation. |
| 10 | 23/07—29/07 | 80% | 79.1%, 76.8% | <v> | No | - | | Verbs should be testvoc clean. |
| 11 | 30/07—05/08 | 80% | 79.9%, | - | | - | | Work on disambiguation. |
| 12 | 06/08—12/08 | 80% | | *all categories clean* | | 500 words | | **Final evaluation**. Tidying up, releasing. |
Notes
- ↑ Although both lexicons have the same number of stems, the transducer for Tatar usually has lower coverage, because some forms (e.g. some verbal forms) are not implemented yet. Here I give numbers for both directions, although the domains are different. Not too sure about it; hope this is OK.
- ↑ As of Tuesday 26th June
- ↑ `apertium-kaz-tat$ perl ../../trunk/apertium-eval-translator/apertium-eval-translator.pl -t texts/3story.kaz-tat.12072012.txt -r texts/3story.kaz-tat.postedit.txt`
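The WER figures in the table come from apertium-eval-translator (see note 3). The core quantity it reports is just the word-level edit distance between the MT output and its post-edited reference, divided by the reference length; a minimal sketch of that idea (not a re-implementation of the script):

```python
def wer(hypothesis, reference):
    """Word error rate: word-level edit distance between the MT output
    and the post-edited reference, divided by the reference length."""
    hyp, ref = hypothesis.split(), reference.split()
    # Classic dynamic-programming edit distance over words.
    d = [[0] * (len(ref) + 1) for _ in range(len(hyp) + 1)]
    for i in range(len(hyp) + 1):
        d[i][0] = i
    for j in range(len(ref) + 1):
        d[0][j] = j
    for i in range(1, len(hyp) + 1):
        for j in range(1, len(ref) + 1):
            cost = 0 if hyp[i - 1] == ref[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,       # deletion
                          d[i][j - 1] + 1,       # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(hyp)][len(ref)] / len(ref)

# One missing word against a six-word reference.
print(wer("the cat sat on mat", "the cat sat on the mat"))
```

The real script additionally handles tokenisation details and reports several related statistics, so its numbers need not match this sketch exactly.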