User:Ilnar.salimzyan/GSoC2014

From Apertium

Revision as of 11:11, 16 May 2014

Apertium-tat-rus – a machine translation system from Tatar to Russian

This page is used to organize thoughts and document the development process. If you are only interested in the workplan and stats, refer to the 'Workplan' and 'Current state' sections of the Tatar and Russian page.

Post-application period

* work on the 'James and Mary' translation
** get rid of the debugging symbols
** get the baseline WER
* get permission to use one of the modern government-funded Tatar-Russian dictionaries under a free license and digitize it, or fall back to one of the dictionaries in the public domain and scan that
* read documentation on chunking-based transfer and papers describing other Apertium pairs for distant languages

'James and Mary' translation

The story is in corpus/corpus.tat.txt (first 50 lines). There are no [*@#] errors as of r52944. WER is 71.84%, PER is 55.26%.
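
The WER and PER figures come from the usual evaluation tooling; as a reminder of what the number means, WER is the word-level edit distance between the MT output and the reference translation, divided by the reference length. A minimal self-contained sketch (toy sentences, not the actual corpus; the wer helper is hypothetical):

```shell
#!/bin/sh
# Word error rate as word-level edit distance over reference length.
# The wer helper is hypothetical -- the pair's actual figures come from
# the standard evaluation tooling, not from this sketch.
wer() {
    # $1 = reference sentence, $2 = MT output sentence
    awk -v ref="$1" -v hyp="$2" 'BEGIN {
        n = split(ref, r, " "); m = split(hyp, h, " ")
        for (i = 0; i <= n; i++) d[i, 0] = i
        for (j = 0; j <= m; j++) d[0, j] = j
        for (i = 1; i <= n; i++)
            for (j = 1; j <= m; j++) {
                c = (r[i] == h[j]) ? 0 : 1
                best = d[i-1, j] + 1                                # deletion
                if (d[i, j-1] + 1 < best) best = d[i, j-1] + 1      # insertion
                if (d[i-1, j-1] + c < best) best = d[i-1, j-1] + c  # substitution
                d[i, j] = best
            }
        printf "%.2f\n", 100 * d[n, m] / n
    }'
}

wer "я ушёл домой вчера" "я пришёл домой"   # prints 50.00 (2 edits / 4 words)
```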

Bilingual dictionary

For now, we can at least look up the stems from apertium-tat. I am working on getting something bigger than that.

Literature review

(Ideas to try out and notes)

* sme-nob paper, eus-eng paper, eng-kaz paper
** use macros, but try to avoid variables. "Adapt" macros as seen in some of the hbs pairs can help with that.
** it's possible to use twol to forbid some analyses (hopefully we won't have to use that, but good to know that it's possible).

Other thoughts:

* acceptance tests for an Apertium MT system are: regression tests on the wiki, corpus tests (WER and the number of [*@#] errors) and testvoc. Unit testing an Apertium MT system means testing its modules (modes). Figure out how to unit-test each module.
** one should be able to run the tests without an internet connection. Keeping a copy of the 'regression tests' HTML page in /dev solves this problem, but it doesn't allow adding new tests while offline. One way to deal with that is to keep a local copy of the regression tests in the wiki format, so that if you add new tests while flying over the Atlantic, you can copy-paste them to the wiki page of the pair later.
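
A minimal sketch of the "local copy in wiki format" idea. The file name, the exact {{test|...}} template layout and the example sentences are assumptions based on how tests are usually written on pair pages; the translation step itself is omitted, only the extraction of input/expected pairs is shown:

```shell
#!/bin/sh
# Sketch: keep regression tests locally in the same wiki markup as the
# pair's page, so tests added offline can later be pasted to the wiki
# verbatim.  File name, template layout and example sentences are
# illustrative assumptions.

cat > local-tests.wiki <<'EOF'
* {{test|tat|Мин киттем.|Я ушёл.}}
* {{test|tat|Ул укый.|Он читает.}}
EOF

# Split on '|': field 3 is the source sentence, field 4 the expected
# translation (with the trailing '}}' stripped).  A real runner would
# pipe field 3 through the tat-rus mode and compare with field 4.
awk -F'|' 'index($0, "{{test|") {
    sub(/}}.*$/, "", $4)
    print $3 "\t" $4
}' local-tests.wiki
```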

Community-bonding period

'''Deliverables 0:'''
# testvoc script(s) which don't take forever to run (consider footnote #5 in the proposal)
# a way to test the apertium-rus generator
# a digital dictionary under a free license, or an OCR'd public-domain dictionary
# the parallel corpus in /corpus (= the development corpus) expanded with texts representing domains the system could potentially be applied to (500 sentences?)
# tat-rus-t1x.test, tat-rus-t2x.test, tat-rus-t3x.test, and tat-rus-transfer.test, which will run all three
# multiword pending tests on the wiki which roughly cover the core of the desired functionality (at least the 52 "sentence models" listed in the "Tatar Syntax" book)
# "workplan" and "current state" tables on the [[Tatar and Russian]] page which will track progress on the things I've promised to do in the proposal (on the weekend, 17-18 May)

Testvoc

Ok, the usual testvoc (see apertium-tat-rus/testvoc/standard) works and so far doesn't take too much time to run. We've also set up a prefixing system in apertium-tat/tests/morphotactics which, for one word per pardef, provides a text file with the full paradigm of that word.

The apertium-tat-rus/testvoc/lite script, which is supposed to take those text files from apertium-tat, extract the LUs, run them through inconsistency.sh and generate a testvoc-summary file in which each line shows the stats for one text file, doesn't work yet. For the time being, I can do that manually, simply grepping for "[@#]" errors, and automate it along the way.

If time permits, it would be good to set it up sooner rather than later (and also to set up the corpus testvoc in the same apertium-tat-rus/testvoc directory).
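
The manual workflow described above (grep for "[@#]" errors, one summary line per text file) can be sketched as a small script. The summarize helper and the sample data are illustrative, not the final testvoc/lite:

```shell
#!/bin/sh
# Sketch of the testvoc/lite summary step: for each expanded-paradigm file,
# count how many LUs come out with [@#] errors and emit one summary line.
# The summarize helper and the sample data are illustrative; the real
# script would first run the LUs through the pipeline / inconsistency.sh.

summarize() {
    # $1 = file with pipeline output, one LU per line
    total=$(wc -l < "$1")
    errors=$(grep -c '[@#]' "$1")
    printf '%s: %s LUs, %s with [@#] errors\n' "$1" "$total" "$errors"
}

# Simulated pipeline output (two of the four LUs carry error marks):
cat > sample.txt <<'EOF'
китап
#китап
алма@
бала
EOF

summarize sample.txt   # one line of the would-be testvoc-summary file
```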

A way to test the Russian generator

Have a look at apertium-rus/tests/rus.test (run by the './qa.sh rus' command in the apertium-rus/ directory).

That test alone greatly reduces the fear to modify apertium-rus, since it provides "some 'invariant' that lets us know when we've changed the behavior of the system. The key thing is that correct behavior is defined by what the set of classes did yesterday, not by any external standard of correctness".[1]
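
The invariant idea can be sketched as follows. GEN stands in for the actual generation command (something like lt-proc -g over the Russian generator binary; the exact invocation is an assumption), and the recorded file plays the role of "what the system did yesterday":

```shell
#!/bin/sh
# Sketch of a record-and-diff generator test.  GEN is a stand-in for the
# real generation command; here it defaults to a stub ('cat') so the
# logic runs anywhere.  The tagged forms below are only illustrative.
GEN=${GEN:-cat}

cat > analyses.txt <<'EOF'
^книга<n><f><sg><nom>$
^книга<n><f><pl><nom>$
EOF

# First run: record today's behaviour as the expected output.
$GEN < analyses.txt > expected.txt

# Later runs: any change in behaviour shows up as a diff against the record.
if $GEN < analyses.txt | diff -u expected.txt - >/dev/null; then
    echo "generator output unchanged"
else
    echo "generator behaviour changed -- review the diff"
fi
```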

Unit tests for transfer

References

  1. Michael Feathers (2004). Working Effectively with Legacy Code.