Building a pseudo-parallel corpus

From Apertium


Revision as of 10:49, 23 August 2012

Acquiring parallel corpora can be a difficult process, and for some language pairs such resources may not exist at all. However, we can use a language model for the target language to create pseudo-parallel corpora, and then use them in the same way as parallel ones.

IRSTLM

IRSTLM is a toolkit for building n-gram language models from corpora. It supports different smoothing and interpolation methods, including Witten-Bell smoothing, Kneser-Ney smoothing and others.
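IRSTLM itself is a C++ toolkit, but as a rough illustration of what Witten-Bell smoothing computes, here is a minimal bigram model in Python. The function names and training text are invented for this sketch and are not part of IRSTLM:

```python
from collections import Counter, defaultdict

def train_bigram_wb(tokens):
    """Train a bigram model with Witten-Bell interpolation (sketch only)."""
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    followers = defaultdict(set)          # distinct word types seen after each history
    for a, b in zip(tokens, tokens[1:]):
        followers[a].add(b)
    total = len(tokens)

    def p_unigram(w):
        return unigrams[w] / total

    def p_bigram(w, h):
        # Witten-Bell: interpolate the maximum-likelihood bigram estimate
        # with the unigram model; the more distinct continuations the
        # history h has, the more weight goes to the backoff distribution.
        t = len(followers[h])             # number of distinct types after h
        c = unigrams[h]                   # count of the history h
        if c == 0:
            return p_unigram(w)           # unseen history: pure backoff
        lam = c / (c + t)
        ml = bigrams[(h, w)] / c
        return lam * ml + (1 - lam) * p_unigram(w)

    return p_bigram
```

Because of the interpolation, every word gets non-zero probability after any seen history, which is the property that matters when the model has to score translations containing unseen n-grams.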

The full documentation can be viewed here, and the whole toolkit can be downloaded here.

Building a pseudo-parallel corpus

The main idea is to take a source-language corpus and run it through the Apertium pipeline, but let the target-language model choose the translation instead of Apertium. The main algorithm is as follows (example for mk-en):

Run the corpus through mk-en-biltrans
Run through <code>apertium-lex-tools/scripts/biltrans-to-multitrans.py</code>
Run through the rest of the pipeline, from <code>apertium-transfer -b</code> onwards
Run through <code>apertium-lex-learner/irstlm-ranker</code>
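The final ranking step above can be sketched as follows. `rank_translations` and the toy scorer are hypothetical stand-ins for what <code>irstlm-ranker</code> does: it scores every candidate translation of a sentence with the IRSTLM language model and keeps the best-scoring one:

```python
def rank_translations(groups, lm_score):
    """For each group of candidate translations of one source sentence,
    keep the candidate the language model scores highest (log-probability).
    This mimics the role of irstlm-ranker in the pipeline; the real tool
    reads the multitrans output and uses an IRSTLM model for scoring."""
    best = []
    for candidates in groups:
        best.append(max(candidates, key=lm_score))
    return best
```

Usage with a toy scorer (a lookup table of made-up log-probabilities standing in for the language model):

```python
scores = {"he goes home": -2.0, "he goes to home": -5.0}
chosen = rank_translations([["he goes to home", "he goes home"]],
                           lambda s: scores[s])
# the more fluent candidate wins, giving one line of the pseudo-parallel corpus
```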

optimization