Apertium-regtest is a program for managing regression tests and [[Corpus test|corpora]].

For each input, rather than treating the output as correct or incorrect, Apertium-regtest has three possible designations: incorrect, expected, and gold. Expected outputs are what the pipeline should produce given the rules that have been written, whereas gold outputs are what the pipeline would produce if it were perfect. This allows the same tests both to guard against regressions and to measure the quality of the translator.
 
   
 
== Overview ==

The regression testing system runs a corpus through a pipeline (whether analysis, translation, or anything else), recording the output of each step. These outputs are compared to the expected outputs and an optional list of ideal (gold) outputs.
   
 
If the actual output is different from the expected or the ideal, this counts as a failing test.
 
Differences are presented to the developer, who can choose to accept some or all of the actual output as the new expected output.
   
 
In this way, the regression tests help to ensure that changes to the system do not cause anything to get worse and that the test data is an accurate reflection of the current state of the system while minimizing the effort required of the developer to keep things up-to-date.
 
=== Typical Workflow ===

# run <code>apertium-regtest web</code> and open the link in your browser
# make changes to dictionaries, transfer rules, etc.
# recompile
# in the browser, select one or all of the corpora to rerun tests for
# if any of the entries in the corpus have gotten worse, return to step 2
# accept changes
# commit the changed test files along with the dictionaries, transfer rules, etc.
   
 
== Specification ==
 
The test runner can be run in either static mode (which functions as a test that can pass or fail) or in dynamic (interactive) mode (which updates the data to reflect the state of the translator).
 
   
The test runner will by default check for a file named <code>test/tests.json</code>. This file will contain one or more entries of the form
   
<pre>
{
    "[name]": {
        "mode": "[mode-name]",
        "input": "[file-name]"
    }
}
</pre>
   
=== Input Corpus ===
Where <code>name</code> is the name of this corpus, <code>mode-name</code> names a pipeline mode (usually <code>abc-xyz</code> or <code>xyz-abc</code>), and the value of <code>"input":</code> is a text file where each line contains an input sentence. Line breaks can be included in an input by writing <code>\n</code>, and comments beginning with <code>#</code> will be ignored.
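As an illustration, the corpus entry and the input conventions above can be exercised with a short Python sketch. The exact comment and <code>\n</code> handling in apertium-regtest may differ; this only mirrors the description:

```python
import json

# Hypothetical tests.json content following the schema described above.
TESTS_JSON = '''
{
    "general": {
        "mode": "wad-tagger",
        "input": "general-input.txt"
    }
}
'''

def load_corpora(raw):
    """Parse a tests.json string into {corpus-name: {"mode": ..., "input": ...}}."""
    return json.loads(raw)

def read_corpus_lines(text):
    """Apply the input conventions described above: lines beginning with
    '#' are comments, and a literal backslash-n inside a line becomes a
    real line break within a single input."""
    inputs = []
    for line in text.splitlines():
        if line.startswith('#'):
            continue
        inputs.append(line.replace('\\n', '\n'))
    return inputs

corpora = load_corpora(TESTS_JSON)
print(corpora['general']['mode'])  # wad-tagger
print(read_corpus_lines('# comment\nwona pasi\nsiri'))  # ['wona pasi', 'siri']
```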
=== Mode Specification ===
   
 
The mode will be read from <code>modes.xml</code> and each step will be named in the same fashion as <code>gendebug="yes"</code>. That is, using <code>debug-suff</code> if it is present, otherwise guessing a standard suffix, and finally falling back to <code>NAMEME</code>. If more than one step has the same debug suffix, they will be numbered sequentially.
   
If the input file is not intended to be passed through the entire pipeline, the option <code>"start-step": [suffix]</code> can be added, where <code>suffix</code> is the suffix of one of the steps in the pipeline.
   
If the test does not correspond to any pipeline in <code>modes.xml</code>, <code>"mode": [mode-name]</code> can be replaced with <code>"command": [cmd]</code>, where <code>cmd</code> is an arbitrary bash command which will be run in the main directory of the repository. For the purposes of <code>expected.txt</code> and <code>gold.txt</code>, this will be treated as a pipeline containing a single step named <code>all</code>.
   
=== Directory Structure ===
There are 4 types of files associated with a particular corpus: input, output, expected, and gold. There are two arrangements of files currently supported: flat and nested.
In flat mode, all files are placed in the same directory. In nested mode, output, expected, and gold each have a separate subdirectory within <code>test/</code>.
Flat mode is the default, but nested can be specified by adding the following to <code>test/tests.json</code>:

<pre>
"settings": {
    "structure": "nested"
}
</pre>

A flat directory can be automatically converted to a nested one using [https://github.com/apertium/apertium-regtest/blob/master/tools/flat2nest.py this script].

{|class="wikitable"
! name !! description !! flat filename !! nested filename
|-
| input || text to be passed through the pipeline || specified in <code>test/tests.json</code> || specified in <code>test/tests.json</code>
|-
| output || temporary files containing the current output of the pipeline || <code>test/[corpus].[step].output.txt</code> || <code>test/output/[corpus].[step].txt</code>
|-
| expected || the output of the pipeline as of the last commit || <code>test/[corpus].[step].expected.txt</code> || <code>test/expected/[corpus].[step].txt</code>
|-
| gold || ideal outputs || <code>test/[corpus].[step].gold.txt</code> || <code>test/gold/[corpus].[step].txt</code>
|}

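The two layouts in the table can be summarized with a small path helper. This is a sketch of the naming scheme only; apertium-regtest builds these paths internally:

```python
import os

def corpus_file(corpus, step, kind, structure='flat', base='test'):
    """Build the path of an output/expected/gold file for a corpus and
    pipeline step under either directory structure described above."""
    if kind not in ('output', 'expected', 'gold'):
        raise ValueError('kind must be output, expected, or gold')
    if structure == 'nested':
        # nested: test/<kind>/<corpus>.<step>.txt
        return os.path.join(base, kind, '%s.%s.txt' % (corpus, step))
    # flat: test/<corpus>.<step>.<kind>.txt
    return os.path.join(base, '%s.%s.%s.txt' % (corpus, step, kind))

print(corpus_file('general', 'disam', 'expected'))
# test/general.disam.expected.txt
print(corpus_file('general', 'disam', 'expected', structure='nested'))
# test/expected/general.disam.txt
```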
=== Other ===
An individual input line is considered a passing test if, for each of the relevant steps, its output either matches expected or appears in gold, and a failing test otherwise. By default, only the final step of the pipeline is considered relevant. A list of relevant steps can be provided by setting <code>"relevant": [suffixes...]</code> (for example, <code>"relevant": ["morph", "transfer", "postgen"]</code>).
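The pass/fail rule just described can be sketched in a few lines of Python. The function name and data shapes here are illustrative, not the tool's internals:

```python
def line_passes(outputs, expected, gold, relevant=None):
    """outputs/expected: {step: line}; gold: {step: [acceptable lines]}.
    A line passes if, for every relevant step, the actual output matches
    the expected output or appears among the gold outputs. By default
    only the final step is relevant."""
    steps = relevant if relevant is not None else [list(outputs)[-1]]
    for step in steps:
        out = outputs[step]
        if out != expected.get(step) and out not in gold.get(step, []):
            return False
    return True

outputs  = {'morph': '^siri/siri<num>$', 'disam': '^siri/siri<num>$'}
expected = {'morph': '^siri/siri<num>$', 'disam': '^siri/ri<v>$'}
gold     = {'disam': ['^siri/siri<num>$']}
print(line_passes(outputs, expected, gold))                    # True
print(line_passes(outputs, expected, {}, relevant=['disam']))  # False
```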
   
 
In dynamic mode, differences between the output and the files will be presented to the user, who will have the option to add the output to either file.
 
   
See https://github.com/TinoDidriksen/regtest/wiki for images of what the workflow in dynamic mode might look like. A command line interface is also available.
   
 
== Repository Structure ==
 
   
If the test runner does not find a file named <code>test/tests.json</code>, it will guess that the tests live in <code>test-[name]</code> (for example, <code>apertium-eng</code> would have the test repository <code>test-eng</code>) and offer to clone that repository if it exists.
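That naming convention amounts to the following hypothetical helper (not part of the tool itself):

```python
def guess_test_repo(module):
    """Derive the external test repository name from a module name,
    following the convention described above (apertium-eng -> test-eng)."""
    prefix = 'apertium-'
    if module.startswith(prefix):
        module = module[len(prefix):]
    return 'test-' + module

print(guess_test_repo('apertium-eng'))  # test-eng
```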
== File Structure Example ==
=== <code>apertium-wad/test/tests.json</code> ===
In general, this file is set up once at the beginning and rarely edited after that.

<pre>
{
    "general": {
        "mode": "wad-tagger",
        "input": "general-input.txt"
    }
}
</pre>

=== <code>apertium-wad/test/general-input.txt</code> ===
Any sentence, phrase, or form that you have gotten to work or are trying to get working can and should be added to one of the input files.

<pre>
wona pasi
siri
muandu
</pre>

=== <code>apertium-wad/test/general-disam-expected.txt</code> ===
Each entry is delimited by blanks containing the hash of the corresponding input line in order to track insertions and deletions in the input file and also so that line breaks in the input will not cause problems.
The lines are sorted by hash rather than being in the same order as the input for simplicity and to minimize the diffs resulting from reorganizing the input.
Expected files are intended to show exactly the current output of the system and thus should not be edited by hand.

<pre>
[APxOFSXUCZrF#0] ^muandu/muandu<num>$
[/APxOFSXUCZrF]
[ZeKQm_Ed8zYn#0] ^wona/wona<n>$ ^pasi/pa<det><def><mid><pl><nh>$
[/ZeKQm_Ed8zYn]
[ge00E0i-0UxQ#0] ^siri/ra<v><p3><pl><nh><o3sg>/siri<num>/ri<v><p3><pl><nh>$
[/ge00E0i-0UxQ]
</pre>

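The hash-delimited layout shown above can be read with a short parser sketch. The delimiter details are inferred from the example and should be treated as an assumption, not a specification of apertium-regtest's format:

```python
import re

# Sample in the format shown above.
SAMPLE = '''[APxOFSXUCZrF#0] ^muandu/muandu<num>$
[/APxOFSXUCZrF]
[ZeKQm_Ed8zYn#0] ^wona/wona<n>$ ^pasi/pa<det><def><mid><pl><nh>$
[/ZeKQm_Ed8zYn]
'''

def parse_entries(text):
    """Return {hash: content} for a hash-delimited expected/gold file.
    '[hash#N]' opens an entry and '[/hash]' closes it."""
    pattern = re.compile(r'\[(?P<h>[^/#\]]+)#\d+\] (?P<body>.*?)\n\[/(?P=h)\]',
                         re.DOTALL)
    return {m.group('h'): m.group('body') for m in pattern.finditer(text)}

print(sorted(parse_entries(SAMPLE)))  # ['APxOFSXUCZrF', 'ZeKQm_Ed8zYn']
```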
=== <code>apertium-wad/test/general-disam-gold.txt</code> ===
Like the expected output, gold output is delimited and sorted by hash. Multiple possible ideals are separated by <code>[/option]</code>.
Since the entries are delimited by hashes, the recommended way to interact with these files is via the provided interfaces. However, there are instances where you might want to add values directly. For small corpora, you can copy the corresponding <code>expected.txt</code> file and edit the entries. For larger corpora there are some conversion scripts available at https://github.com/apertium/apertium-regtest/tree/master/tools. Bug [[User:Popcorndude]] if you need more of these.

<pre>
[ZeKQm_Ed8zYn]
^wona/wona<n>$ ^pasi/pa<det><def><mid><pl><nh>$ [/option]
[/ZeKQm_Ed8zYn]
</pre>
   
 
== Conversion Process ==
 
Any existing tests will be converted to this format, with their inputs placed in an input file and their outputs in the appropriate <code>gold.txt</code>. All <code>expected.txt</code> files will be filled in with the current output of the pipeline.
 
Any test which does not correspond to an existing mode or the beginning of an existing mode will use <code>command</code>.
 
   
In any module for which I cannot find existing tests, I will select a few random forms from the analyzer as the corpus. <code>gold.txt</code> will be left empty.
[[Category:Quality_control]]
[[Category:Documentation_in_English]]

Latest revision as of 22:21, 10 August 2021
