User:Ilnar.salimzyan/On testing
Test-driven language pair development, or: on the testing strategy in Apertium
===============================================================================

Some terminology
----------------

# Acceptance tests define when a language pair or a particular sub-module (like
  a morphological transducer or a CG) is done. That is, they define the
  requirements for what you are going to develop, and they are written in the
  process of communicating with "stakeholders"/mentors or anyone funding the
  development.
# Unit tests are written by programmers for programmers. They describe how the
  system works and what the structure and behavior of the code is.
# Integration tests, as the name suggests, test whether components (in our
  case, modules like morphological transducers, disambiguators, lexical
  selection rules and transfer rules) are successfully integrated into a system
  (= a language pair). For a language pair, you can think of its acceptance
  tests as integration tests, since they test how the modules of the pair
  integrate into a complete machine translation system.

Overview
--------

Testing an Apertium MT system splits into two branches: acceptance tests for
the pair as a whole, and unit tests which check the output of each module.

Acceptance tests for the whole pair:
* Regression and Pending tests on the wiki
* Corpus test
* upper bound for WER
* upper bounds for [*@#] errors
* Testvoc (has to be clean)
* numbers:
  * of stems in the bidix
  * of lrx rules <-> ambiguity rate before and after
  * of transfer rules
* (gisting evaluation)

Tests for each module:

Morphological transducers
  Acceptance: recall or coverage, precision, # of stems
  Unit:       morphophonology, morphotactics

Constraint Grammar
  Acceptance: ambiguity rate before & after, precision, # of rules
  Unit:       INPUT/OUTPUT comments for each rule

Lexical selection
  Acceptance: ambiguity rate before & after, precision, # of rules
  Unit:       INPUT/OUTPUT comments for each rule

Transfer
  Acceptance: wiki tests for phrases and sentences, "testvoc-lite" tests for
              single words
  Unit:       INPUT/OUTPUT comments in the headers of rules
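The INPUT/OUTPUT comments mentioned above are most useful when they can also be run. Here is a minimal sketch of how one such pair could be checked from the command line; the test sentence, the expected translation and the use of the full tat-rus pipeline (rather than a per-module debug mode from modes.xml) are assumptions for illustration only:

 #!/bin/sh
 # Minimal sketch: run one INPUT line through the pair and compare it with the
 # documented OUTPUT.  The sentence and the expected translation are
 # hypothetical; a per-module debug mode from modes.xml could be used instead
 # of the full tat-rus pipeline to test a single module.
 INPUT="Мин бардым."
 EXPECTED="Я пошёл."   # assumed expected output, for illustration only
 
 ACTUAL=$(echo "$INPUT" | apertium -d . tat-rus)
 
 if [ "$ACTUAL" = "$EXPECTED" ]; then
     echo "PASS: $INPUT"
 else
     echo "FAIL: '$INPUT' gave '$ACTUAL', expected '$EXPECTED'"
 fi

A qa.sh-style driver can then loop over all such pairs collected from the rule files.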
Cheat sheet
-----------
In apertium-tat-rus:

Command | When |
---|---|
./qa.sh t1x | apertium-tat-rus.tat-rus.t1x changed |
./qa.sh t2x | apertium-tat-rus.tat-rus.t2x changed |
./qa.sh t3x | apertium-tat-rus.tat-rus.t3x changed |
./qa.sh t4x | apertium-tat-rus.tat-rus.t4x changed |
./qa.sh | Before committing |
./qa.sh corp | Corpus test ('./qa.sh' will do this) |
./wiki-tests.sh Regression tat rus [update] | Pass 'update' if the Tatar and Russian/Regression tests wiki page has changed |
./qa.sh testvoc reg | Currently the local tests in testvoc/lite/regression.txt ('./qa.sh' will do this) |
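For orientation only, here is a hypothetical sketch of how a qa.sh driver like the one in the table above might dispatch on its first argument; the file names and helper commands are assumptions, not the real apertium-tat-rus script:

 #!/bin/sh
 # Hypothetical sketch of a qa.sh driver; NOT the real apertium-tat-rus script.
 # All file and directory names below are assumptions.
 
 run_transfer_tests () {
     # e.g. pipe the saved INPUT lines for the given .t?x stage through the
     # pair and diff them against the saved OUTPUT lines
     echo "running $1 tests ..."
 }
 
 case "$1" in
     t1x|t2x|t3x|t4x)
         run_transfer_tests "$1"
         ;;
     corp)
         # corpus test: translate a fixed corpus and diff against the stored translation
         apertium -d . tat-rus < corpora/corpus.tat > corpora/corpus.rus.new
         diff corpora/corpus.rus corpora/corpus.rus.new
         ;;
     testvoc)
         # './qa.sh testvoc reg' currently runs the local tests kept in
         # testvoc/lite/regression.txt; the helper script name here is hypothetical
         sh testvoc/lite/run-regression.sh "$2"
         ;;
     "")
         # no argument: run everything, as the cheat sheet recommends before committing
         for t in t1x t2x t3x t4x; do run_transfer_tests "$t"; done
         "$0" corp
         "$0" testvoc reg
         ;;
 esac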
In apertium-kaz:

Command | When |
---|---|
./wiki-tests Regression kaz kaz [update] | Before committing |
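wiki-tests.sh itself is not reproduced here; the sketch below only illustrates the general idea behind wiki-driven regression testing. The page URL and the assumed 'source :: expected' line format are hypothetical:

 #!/bin/sh
 # Sketch of the idea behind wiki-driven regression tests only; not the real
 # wiki-tests.sh.  The page URL and the 'source :: expected' line format are
 # assumptions.
 URL="https://wiki.apertium.org/index.php?title=Tatar_and_Russian/Regression_tests&action=raw"
 wget -q -O regression.txt "$URL"
 
 # assume every test line looks like:  source sentence :: expected translation
 grep ' :: ' regression.txt | while read -r line; do
     src=${line%% :: *}
     expected=${line#* :: }
     got=$(echo "$src" | apertium -d . tat-rus)
     [ "$got" = "$expected" ] || printf 'FAIL: %s -> %s (expected: %s)\n' "$src" "$got" "$expected"
 done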
An idea from IRC about documenting lexc lexicons:

<selimcan> I was thinking of at least keeping a list of the lexicons which stems can continue with (i.e. directly) separate in lexc
<selimcan> giving examples for each
<selimcan> I mean, right before the stems section, a list of N1, N2, N-RUS, V-TV, etc.
<selimcan> with a short comment and examples for each
<selimcan> Lexicon : Description : Example
<selimcan> N1 : common nouns : бақша
<selimcan> N5 : nouns loaned from Russian (they often don't obey the synharmonism laws, which is why they should be kept separate) : актив
<selimcan> N-COMPUND-PX : compound nouns with a 3p possessive on the last noun
<selimcan> firespeaker, you know, like we do for adjectives, but for all the lexicons we have
<selimcan> that kind of comment for every lexicon (that stems can link to), and in one place, so that whoever is adding stems to a lexicon doesn't have to read the entire morphology description in lexc
<selimcan> plus a full paradigm of one example word linking to that lexicon, in apertium-foo/tests/morphotactics or somewhere else (useful for testvoc and potentially for automatically guessing the paradigm)
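As a concrete illustration of the proposal above, such a comment table in lexc might look roughly like this; the lexicon names come from the chat, while the stem-lexicon name, stems and glosses are only assumptions:

 ! Lexicon      : Description                                           : Example
 ! N1           : common nouns                                          : бақша
 ! N5           : nouns loaned from Russian (often ignore synharmonism) : актив
 ! N-COMPUND-PX : compound nouns with a 3p possessive on the last noun  :
 
 ! hypothetical name for the stems section:
 LEXICON CommonStems
 
 бақша N1 ; ! "garden" (assumed gloss)
 актив N5 ; ! "asset" (assumed gloss)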