Apertium-regtest is a program for managing regression tests and [[Corpus test|corpora]].
For each input, rather than treating the output as simply correct or incorrect, Apertium-regtest has three possible designations: incorrect, expected, and gold. Expected outputs are what the pipeline should produce given the rules that have been written, whereas gold outputs are what it would produce if the pipeline were perfect. This allows us to use the same tests both to ensure that there are no regressions and to measure the quality of the translator.
== Overview ==
The regression testing system runs a corpus through a pipeline (whether analysis, translation, or something else), recording the output of each step. These outputs are compared to the expected outputs and to an optional list of ideal outputs.
If the actual output differs from both the expected output and the ideal outputs, this counts as a failing test.

Differences are presented to the developer, who can choose to accept some or all of the actual output as the new expected output.

In this way, the regression tests help to ensure that changes to the system do not make anything worse and that the test data accurately reflects the current state of the system, while minimizing the effort required of the developer to keep things up to date.
== Specification ==
The test runner can be run either in static mode (which functions as a test that can pass or fail) or in dynamic mode (which interactively updates the test data to reflect the current state of the translator).
The test runner will by default check for a file named <code>tests/tests.json</code>. This file will contain one or more entries of the form

<pre>
{[name]: {"mode": [mode-name], "input": [file-name]}}
</pre>
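For example, a file defining two corpora for a hypothetical <code>wad-tagger</code> mode might look like this (the corpus names are illustrative):

<pre>
{
  "general": {"mode": "wad-tagger", "input": "general-input.txt"},
  "numerals": {"mode": "wad-tagger", "input": "numeral-input.txt"}
}
</pre>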
=== Input Corpus ===
Where <code>name</code> is the name of this corpus, <code>mode-name</code> names a pipeline mode (usually <code>abc-xyz</code> or <code>xyz-abc</code>), and the value of <code>"input":</code> is a text file where each line contains an input sentence. Line breaks can be included in the input by writing <code>\n</code>, and comments beginning with <code>#</code> will be ignored.
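For instance, a small input file might look like this (the contents are hypothetical; the <code>\n</code> in the last line stands for a line break within a single input):

<pre>
# a few short test sentences
wona pasi
siri muandu\nwona siri
</pre>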
=== Mode Specification ===
The mode will be read from <code>modes.xml</code> and each step will be named in the same fashion as <code>gendebug="yes"</code>: that is, using <code>debug-suff</code> if it is present, otherwise trying to guess a standard suffix, and finally falling back to <code>NAMEME</code>. If more than one step has the same debug suffix, they will be numbered sequentially.
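As an illustration (the suffixes here are assumptions based on common Apertium debug modes, not fixed by this specification), a tagger pipeline consisting of a morphological analyser followed by a disambiguator might produce steps named:

<pre>
morph      (lt-proc, morphological analysis)
disam      (cg-proc, disambiguation)
</pre>

A mode with two steps sharing the suffix <code>transfer</code> would have them numbered, e.g. <code>transfer1</code> and <code>transfer2</code>.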
If the input file is not intended to be passed through the entire pipeline, the option <code>"start-step": [suffix]</code> can be added, where <code>suffix</code> is the suffix of one of the steps in the pipeline.
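A hypothetical entry whose corpus should enter the pipeline at the disambiguation step might look like:

<pre>
{ "post": { "mode": "wad-tagger", "start-step": "disam", "input": "post-input.txt" } }
</pre>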
If the test does not correspond to any pipeline in <code>modes.xml</code>, <code>"mode": [mode-name]</code> can be replaced with <code>"command": [cmd]</code>, where <code>cmd</code> is an arbitrary bash command which will be run in the main directory of the repository. For the purposes of <code>expected.txt</code> and <code>gold.txt</code>, this will be treated as a pipeline containing a single step named <code>all</code>.
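A hypothetical entry of this kind, running the analyser directly, might be:

<pre>
{ "analysis": { "command": "lt-proc wad.automorf.bin", "input": "analysis-input.txt" } }
</pre>

Its outputs would then be stored under the single step name <code>all</code>, e.g. in <code>analysis.all.expected.txt</code>.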
=== Other ===
For each step, the test runner will check for files named <code>[name].[step-name].expected.txt</code> and <code>[name].[step-name].gold.txt</code> in the same directory as the input file. <code>expected.txt</code> is assumed to be the output of a previous run and <code>gold.txt</code> is assumed to be the ideal output. <code>gold.txt</code> can contain multiple ideal outputs for each line.
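Under this naming scheme, the hypothetical two-step corpus <code>general</code> from above would be accompanied by files such as:

<pre>
general.morph.expected.txt
general.morph.gold.txt
general.disam.expected.txt
general.disam.gold.txt
</pre>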
An individual input line is considered a passing test if it appears in either <code>expected.txt</code> or <code>gold.txt</code> for each of the relevant steps, and failing otherwise. By default only the final step of the pipeline is considered relevant. A list of relevant steps can be provided by setting <code>"relevant": [suffixes...]</code> (for example, <code>"relevant": ["morph", "transfer", "postgen"]</code>).
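For the hypothetical two-step pipeline above, marking both steps as relevant would look like:

<pre>
{ "general": { "mode": "wad-tagger", "input": "general-input.txt", "relevant": ["morph", "disam"] } }
</pre>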
In dynamic mode, differences between the output and the files will be presented to the user, who will have the option to add the output to either file.

See https://github.com/TinoDidriksen/regtest/wiki for images of what the workflow in dynamic mode might look like. A command-line interface may also be available.
== Repository Structure ==
If the test runner does not find a file named <code>tests/tests.json</code>, it will guess that the tests live in <code>test-[name]</code> (for example, <code>apertium-eng</code> would have test repository <code>test-eng</code>) and offer to clone that repository if it exists.
== File Structure Example ==
<code>apertium-wad/test/tests.json</code>:

<pre>
{ "general": { "mode": "wad-tagger", "input": "general-input.txt" } }
</pre>
<code>apertium-wad/test/general-input.txt</code>:

<pre>
wona pasi
siri
muandu
</pre>
<code>apertium-wad/test/general-disam-expected.txt</code>:
Each entry is delimited by blanks containing the hash of the corresponding input line, both to track insertions and deletions in the input file and so that line breaks in the input will not cause problems.

The lines are sorted by hash rather than being in the same order as the input, for simplicity and to minimize the diffs that result from reorganizing the input.
<pre>
[APxOFSXUCZrF#0] ^muandu/muandu<num>$ [/APxOFSXUCZrF]
[ZeKQm_Ed8zYn#0] ^wona/wona<n>$ ^pasi/pa<det><def><mid><pl><nh>$ [/ZeKQm_Ed8zYn]
[ge00E0i-0UxQ#0] ^siri/ra<v><p3><pl><nh><o3sg>/siri<num>/ri<v><p3><pl><nh>$ [/ge00E0i-0UxQ]
</pre>
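The identifiers above look like truncated URL-safe base64. As a minimal sketch of the idea (the actual hash function is not specified on this page, so the choice of SHA-256 and a 12-character prefix is an assumption):

<pre>
import base64
import hashlib

def line_hash(line: str) -> str:
    # Derive a short, URL-safe identifier for one input line.
    # NOTE: hypothetical sketch; apertium-regtest's real hash may differ.
    digest = hashlib.sha256(line.encode('utf-8')).digest()
    return base64.urlsafe_b64encode(digest)[:12].decode('ascii')

print(line_hash('wona pasi'))  # prints a 12-character tag (not necessarily the one shown above)
</pre>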
<code>apertium-wad/test/general-disam-gold.txt</code>:
Like the expected output, gold output is delimited and sorted by hash. Multiple possible ideals are separated by <code>[/option]</code>.
<pre>
[ZeKQm_Ed8zYn] ^wona/wona<n>$ ^pasi/pa<det><def><mid><pl><nh>$ [/option] [/ZeKQm_Ed8zYn]
</pre>
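Following the same pattern, a hypothetical entry offering two acceptable disambiguations of <code>siri</code>, each terminated by <code>[/option]</code>, might look like:

<pre>
[ge00E0i-0UxQ] ^siri/siri<num>$ [/option] ^siri/ri<v><p3><pl><nh>$ [/option] [/ge00E0i-0UxQ]
</pre>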
== Conversion Process ==
Any existing tests will be converted to this format, with their inputs placed in an input file and their outputs in the appropriate <code>gold.txt</code>. All <code>expected.txt</code> files will be filled in with the current output of the pipeline.
Any test which does not correspond to an existing mode or the beginning of an existing mode will use <code>command</code>.
In any module for which I cannot find existing tests, I will select a few random forms from the analyzer as the corpus. <code>gold.txt</code> will be left empty.