Building a pseudo-parallel corpus

From Apertium
Revision as of 22:39, 23 August 2012 by Fpetkovski (talk | contribs)

Acquiring parallel corpora can be a difficult process and for some language pairs such resources might not exist. However, we can use a language model for the target language in order to create pseudo-parallel corpora, and use them in the same way as parallel ones.


IRSTLM is a toolkit for building n-gram language models from corpora. It supports several smoothing and interpolation methods, including Witten-Bell smoothing, Kneser-Ney smoothing and others. The full documentation can be viewed here, and the whole toolkit can be downloaded here.
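As a rough illustration of what an n-gram language model does (this is not IRSTLM's implementation, which uses the smoothing methods named above), here is a minimal bigram model with add-one smoothing:

```python
from collections import Counter

def train_bigram(corpus):
    """Count unigrams and bigrams over tokenised sentences."""
    uni, bi = Counter(), Counter()
    for sent in corpus:
        toks = ["<s>"] + sent.split() + ["</s>"]
        uni.update(toks)
        bi.update(zip(toks, toks[1:]))
    return uni, bi

def score(sentence, uni, bi):
    """Probability of a sentence under add-one smoothing
    (a crude stand-in for Witten-Bell or Kneser-Ney estimates)."""
    toks = ["<s>"] + sentence.split() + ["</s>"]
    v = len(uni)  # vocabulary size, used for smoothing
    p = 1.0
    for a, b in zip(toks, toks[1:]):
        p *= (bi[(a, b)] + 1) / (uni[a] + v)
    return p

# Toy training data; a real model would be trained on a large TL corpus.
corpus = ["the cat sat on the mat", "the dog sat on the rug"]
uni, bi = train_bigram(corpus)
```

Sentences whose word order was seen in training score higher, which is exactly the property the ranker below exploits when choosing among candidate translations.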

Building a pseudo-parallel corpus

The main idea is to take a source-language corpus and run it through the Apertium pipeline, but this time let the language model, rather than Apertium, choose the preposition. The main algorithm is as follows (example for mk-en):

  • Run the corpus through mk-en-biltrans.
  • Run it through apertium-lex-tools/scripts/ to select the default translations for all words whose POS is not <pr>. This step is necessary to avoid an explosion of possible target-language sentences.
  • Run it through apertium-lex-tools/scripts/ to expand each biltrans sentence to cover all possible lexical transfers. You can also use apertium-lex-tools/scripts/, a slower version that uses less memory.
  • Run it through the rest of the pipeline, from apertium-transfer -b onwards, to get target-language sentences.
  • Run it through apertium-lex-learner/irstlm-ranker-max.
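The expansion and ranking steps above can be sketched in miniature. This is a simulation only: the candidate sentence, the alternatives and the toy scoring function are stand-ins, not real biltrans output or IRSTLM scores.

```python
from itertools import product

def expand(words):
    """Expand a sentence into every combination of the remaining
    translation alternatives (here, only the preposition varies)."""
    return [" ".join(combo) for combo in product(*words)]

def rank(candidates, lm_score):
    """Score every candidate with the target-language model and mark
    the most probable one with |@|, as irstlm-ranker-max does."""
    best = max(candidates, key=lm_score)
    return [c + " |@|" if c == best else c for c in candidates]

# Toy sentence: after selecting defaults for non-<pr> words,
# only the preposition still has alternatives.
sentence = [["he"], ["lives"], ["in", "on", "at"], ["Skopje"]]
candidates = expand(sentence)

# Stand-in LM score; a real run would use IRSTLM probabilities.
toy_lm = lambda s: 1.0 if " in " in s else 0.1
ranked = rank(candidates, toy_lm)
```

Note why step 2 matters: with k ambiguous words of n alternatives each, the expansion produces n^k candidate sentences, so restricting the alternatives to prepositions keeps the candidate set manageable.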

This way, for each expanded translation you get a source-language to target-language probability for every SL:TL pair. The most probable translation is marked with |@|.
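If all you need is the best translation per sentence, filtering the ranker output for the |@| mark is enough. A small sketch, assuming the mark appears on the winning line (the exact irstlm-ranker-max output format may differ):

```python
def best_translations(lines):
    """Keep only the lines marked |@| and strip the marker."""
    return [l.replace("|@|", "").strip() for l in lines if "|@|" in l]

# Hypothetical ranker output for one source sentence.
output = [
    "he lives in Skopje |@|",
    "he lives on Skopje",
    "he lives at Skopje",
]
```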