Generating lexical-selection rules from monolingual corpora


This page describes how to generate lexical selection rules without relying on a parallel corpus.

You will need:
  • apertium-lex-tools
  • A language pair (e.g. apertium-br-fr)
    • The language pair should have the following two modes:
      • -multi, which runs all the modules after lexical transfer (see apertium-mk-en/modes.xml)
      • -pretransfer, which runs all the modules up to lexical transfer (see apertium-mk-en/modes.xml)
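These modes are defined in the pair's modes.xml. As a rough, hypothetical sketch only (the actual programs and file names depend on the pair), the two entries might look something like:

```xml
<!-- Hypothetical sketch: the exact modules and file names
     depend on the language pair's modes.xml. -->
<mode name="en-es-pretransfer" install="no">
  <pipeline>
    <program name="lt-proc">
      <file name="en-es.automorf.bin"/>
    </program>
    <program name="apertium-tagger -g">
      <file name="en-es.prob"/>
    </program>
    <program name="apertium-pretransfer"/>
  </pipeline>
</mode>
<mode name="en-es-multi" install="no">
  <pipeline>
    <!-- …all the modules after lexical transfer: structural
         transfer, interchunk, postchunk, generation… -->
    <program name="lt-proc -g">
      <file name="en-es.autogen.bin"/>
    </program>
  </pipeline>
</mode>
```

Compare against the modes.xml of an existing pair such as apertium-mk-en before copying anything.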


Important: If you don't want to go through the whole process step by step, you can use the Makefile script provided in the last section of this page.

We're going to do the example with EuroParl and the English to Spanish pair in Apertium.

Assuming you have everything installed, the process is as follows:

Take your corpus and make a tagged version of it:

cat | apertium-destxt | apertium -f none -d ~/source/apertium/apertium-en-es en-es-pretransfer >

Make an ambiguous version of your corpus and trim redundant tags:

cat | python ~/source/apertium/apertium-lex-tools/multitrans ~/source/apertium/apertium-en-es/en-es.autobil -b -f -t -n >

Next, generate all the possible disambiguation paths while trimming redundant tags:

cat | ~/source/apertium/apertium-lex-tools/multitrans ~/source/apertium/apertium-en-es/en-es.autobil -m -f -t -n >

Translate and score all possible disambiguation paths:

cat | python ~/source/apertium/apertium-lex-tools/multitrans ~/source/apertium/apertium-en-es/en-es.autobil -m -f -n |
apertium -f none -d ~/source/apertium/apertium-en-es en-es-multi |
~/source/apertium/apertium-lex-tools/irstlm-ranker ~/source/corpora/lm/en.blm -f >
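Tying the four steps together, here is a hypothetical end-to-end sketch. The file names (europarl.en.txt, europarl.en-es.tagged, and so on) are illustrative, chosen to match the europarl.en-es.ambig and europarl.en-es.annotated names used further down; it assumes an installed apertium-en-es pair, an apertium-lex-tools checkout, and an IRSTLM language model at the path shown above:

```shell
# Hypothetical file names; adjust all paths to your own checkout.
CORPUS=europarl.en.txt
PAIR=~/source/apertium/apertium-en-es
LEX=~/source/apertium/apertium-lex-tools

# 1. Tag the corpus
cat $CORPUS | apertium-destxt | apertium -f none -d $PAIR en-es-pretransfer > europarl.en-es.tagged

# 2. Ambiguous version, trimming redundant tags
cat europarl.en-es.tagged | python $LEX/multitrans $PAIR/en-es.autobil -b -f -t -n > europarl.en-es.ambig

# 3. All possible disambiguation paths
cat europarl.en-es.tagged | python $LEX/multitrans $PAIR/en-es.autobil -m -f -t -n > europarl.en-es.multi

# 4. Translate and score each path
cat europarl.en-es.tagged | python $LEX/multitrans $PAIR/en-es.autobil -m -f -n |
apertium -f none -d $PAIR en-es-multi |
$LEX/irstlm-ranker ~/source/corpora/lm/en.blm -f > europarl.en-es.annotated
```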

Now we have a pseudo-parallel corpus where each possible translation is scored. We start by extracting a frequency lexicon:

python3 ~/source/apertium/apertium-lex-tools/scripts/ > europarl.en-es.freq
python3 ~/source/apertium/apertium-lex-tools/scripts/ europarl.en-es.freq > europarl.en-es.freq.lrx
lrx-comp europarl.en-es.freq.lrx europarl.en-es.freq.lrx.bin
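To make the idea concrete, here is a toy sketch (not the real extraction script, whose name is omitted above) of what a frequency lexicon is conceptually: a count of how often the ranker paired each source word with each translation. The word/translation pairs here are fabricated:

```shell
# Toy data: each line is a source-word / translation pair,
# tab-separated. Counting duplicates gives per-translation
# frequencies; the most frequent pairing sorts first.
printf 'letter\tcarta\nletter\tcarta\nletter\tletra\n' \
  | sort | uniq -c | sort -rn
```

The real script additionally marks the most frequent translation of each word as the default.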

From here on, we have two paths we can choose. We can extract rules using a maximum entropy classifier, or we can extract rules based only on the scores provided by irstlm-ranker.

Direct rule extraction

When using this method, we directly continue by extracting ngrams from the pseudo-parallel corpus:

python3 ~/source/apertium/apertium-lex-tools/scripts/ europarl.en-es.freq > ngrams

Next, we prune the generated ngrams:

python3 ~/source/apertium/apertium-lex-tools/scripts/ europarl.en-es.freq ngrams > patterns

Finally, we generate and compile lexical selection rules while thresholding their irstlm-ranker scores:

python3 ~/source/apertium/apertium-lex-tools/scripts/ patterns $crisphold > patterns.lrx
lrx-comp patterns.lrx patterns.lrx.bin
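The generated rules are in Apertium's lexical selection (.lrx) rule format. As a hypothetical illustration (the lemmas, tags, and context are invented, not taken from actual EuroParl output), a context-sensitive rule might look like:

```xml
<rules>
  <!-- Hypothetical example: select "carta" as the translation of
       the noun "letter" when preceded by the verb "sign". -->
  <rule>
    <match lemma="sign" tags="vblex.*"/>
    <match lemma="letter" tags="n.*">
      <select lemma="carta"/>
    </match>
  </rule>
</rules>
```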

Maximum entropy rule extraction

When extracting rules using a maximum entropy criterion, we first extract the features that we will feed to the classifier:

python3 ~/source/apertium/apertium-lex-tools/scripts/ europarl.en-es.freq \
europarl.en-es.ambig europarl.en-es.annotated > events 2>ngrams

We then train classifiers which, as a side effect, score how much each ngram contributes to a given translation:

cat events | grep -v -e '\$ 0\.0 #' -e '\$ 0 #' > events.trimmed
cat events.trimmed | python3 ~/source/apertium/apertium-lex-tools/scripts/ $(YASMET) > all-lambdas
python3 ~/source/apertium/apertium-lex-tools/scripts/ ngrams all-lambdas > rules-all

Next, we extract ngrams:

python3 ~/source/apertium/apertium-lex-tools/scripts/ europarl.en-es.freq rules-all > ngrams-all

we trim them:

python3 ~/source/apertium/apertium-lex-tools/scripts/ ngrams-all > ngrams-trimmed

and generate lexical selection rules:

python3 ~/source/apertium/apertium-lex-tools/scripts/ ngrams-trimmed > europarl.en-es.lrx.bin