Generating lexical-selection rules from a parallel corpus

From Apertium

Revision as of 14:44, 21 September 2013

If you have a parallel corpus, one of the things you can do is generate some lexical selection rules from it, to improve translation of words with more than one possible translation.

You will need

Here is a list of software that you will need installed:

  • Giza++ (or some other word aligner)
  • Moses (for making Giza++ less human-hostile)
  • All the Moses scripts
  • lttoolbox
  • Apertium
  • apertium-lex-tools

Furthermore you'll need:

  • an Apertium language pair
  • a parallel corpus (see Corpora)

Installing prerequisites

See Minimal installation from SVN for apertium/lttoolbox.

See Constraint-based lexical selection module for apertium-lex-tools.

For Giza++, the Moses decoder, etc. you can do:

$ mkdir ~/smt
$ cd ~/smt
$ mkdir local # our "install prefix"
$ wget https://giza-pp.googlecode.com/files/giza-pp-v1.0.7.tar.gz
$ tar xzvf giza-pp-v1.0.7.tar.gz
$ cd giza-pp
$ make
$ mkdir ../local/bin
$ cp GIZA++-v2/snt2cooc.out ../local/bin/
$ cp GIZA++-v2/snt2plain.out ../local/bin/
$ cp GIZA++-v2/GIZA++ ../local/bin/
$ cp mkcls-v2/mkcls ../local/bin/
$ git clone https://github.com/moses-smt/mosesdecoder
$ cd mosesdecoder/
$ ./bjam 

The clean-corpus and train-model scripts referred to below will now be in ~/smt/mosesdecoder/scripts/training/, e.g. ~/smt/mosesdecoder/scripts/training/clean-corpus-n.perl. See http://www.statmt.org/moses/?n=Development.GetStarted if you want to install the binaries to some other directory.

Getting started

We're going to do the example with EuroParl and the English to Spanish pair in Apertium.

Given that you've got all the stuff installed, the work will be as follows:

Prepare corpus

To generate the rules, we need three files,

  • The tagged and tokenised source corpus
  • The tagged and tokenised target corpus
  • The output of the lexical transfer module in the source→target direction, tokenised

These three files should be sentence aligned.

The first thing that we need to do is tag both sides of the corpus:

$ nohup cat europarl.clean.en | apertium-destxt |\
 apertium -f none -d /home/fran/source/apertium-en-es en-es-pretransfer > europarl.tagged.en &
$ nohup cat europarl.clean.es | apertium-destxt |\
 apertium -f none -d /home/fran/source/apertium-en-es es-en-pretransfer > europarl.tagged.es &

Then we need to remove the lines with no analyses, and replace blanks within multiword lemmas with a new character (we will use `~`):

$ paste europarl.lines europarl.tagged.en europarl.tagged.es | grep '<' | cut -f2 | sed 's/ /~/g' | sed 's/$$[^\^]*/$$ /g' > europarl.tagged.new.en
$ paste europarl.lines europarl.tagged.en europarl.tagged.es | grep '<' | cut -f3 | sed 's/ /~/g' | sed 's/$$[^\^]*/$$ /g' > europarl.tagged.new.es
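What the two sed substitutions do is easier to see on an example. Here is a minimal Python sketch of the same idea (the sample line is invented; the real sed expressions also handle any material between lexical units):

```python
def join_lemma_blanks(line):
    # Replace every blank with "~", then restore the single space that
    # separates consecutive lexical units ("$" closes one unit in the
    # Apertium stream format, "^" opens the next).
    return line.replace(" ", "~").replace("$~^", "$ ^")

# Invented sample line in the Apertium stream format:
sample = "^a lot of<det><qnt>$ ^language<n><sg>$"
print(join_lemma_blanks(sample))
# ^a~lot~of<det><qnt>$ ^language<n><sg>$
```

The blank inside the multiword lemma "a lot of" is joined with `~`, while the blank between the two lexical units is kept, so word alignment later treats each unit as one token.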

Next, we need to clean the corpus and remove overly long sentences. (Run this in the same directory as your europarl corpus.)

$ perl (path to your mosesdecoder)/scripts/training/clean-corpus-n.perl europarl.tagged.new es en europarl.tag-clean 1 40
clean-corpus.perl: processing europarl.tagged.new.es & .en to europarl.tag-clean, cutoff 1-40
..........(100000)...

Input sentences: 1786594  Output sentences:  1467708
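The 1 and 40 arguments are the minimum and maximum sentence lengths in tokens. A minimal Python sketch of the filter (the real clean-corpus-n.perl also applies further checks, such as a length-ratio test):

```python
# Sketch of the clean-corpus-n.perl length filter: keep a sentence pair only
# if both sides have between `lo` and `hi` tokens (here 1 and 40).
def keep(src, tgt, lo=1, hi=40):
    ns, nt = len(src.split()), len(tgt.split())
    return lo <= ns <= hi and lo <= nt <= hi

print(keep("a b c", "x y"))  # True
print(keep("a b c", ""))     # False: empty target side is dropped
```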


We're going to cut off the bottom 67,658 sentences for testing (also because Giza++ segfaults somewhere around there).

$ mkdir testing
$ tail -67658 europarl.tag-clean.en > testing/europarl.tag-clean.67658.en
$ tail -67658 europarl.tag-clean.es > testing/europarl.tag-clean.67658.es
$ head -1400000 europarl.tag-clean.en > europarl.tag-clean.en.new
$ head -1400000 europarl.tag-clean.es > europarl.tag-clean.es.new
$ mv europarl.tag-clean.en.new europarl.tag-clean.en
$ mv europarl.tag-clean.es.new europarl.tag-clean.es


These files are:

  • europarl.tag-clean.en: The tagged source language side of the corpus
  • europarl.tag-clean.es: The tagged target language side of the corpus

Check that they have the same length:

$ wc -l europarl.*
   1400000 europarl.tag-clean.en
   1400000 europarl.tag-clean.es
   2800000 total

Align corpus

Now that we've got the corpus files ready, we can align the corpus using the Moses scripts:

nohup perl (path to your mosesdecoder)/scripts/training/train-model.perl -external-bin-dir \
 ~/smt/local/bin -corpus europarl.tag-clean \
 -f en -e es -alignment grow-diag-final-and -reordering msd-bidirectional-fe \
 -lm 0:5:/home/fran/corpora/europarl/europarl.lm:0 >log 2>&1 &

Note: Remember to change all the paths in the above command!

You'll need an LM file; you can copy one from a previous Moses installation. If you don't have one, create a file and put a few words in it. The LM won't actually be used in this process.

This takes a while, from a few hours to a day. So leave it running and go and make a soufflé, or chop some wood or something.

Extract sentences

The first thing we need to do after Moses has finished training is convert the Giza++ alignments to a less human- (and machine-) hostile format:

$ zcat giza.en-es/en-es.A3.final.gz | ~/source/apertium-lex-tools/scripts/giza-to-moses.awk > europarl.phrasetable.en-es

Then we want to make sure again that our file has the right number of lines:

$ wc -l europarl.phrasetable.en-es
1400000 europarl.phrasetable.en-es

Then we want to extract the sentences where the target language word aligned to a source language word is a possible translation in the bilingual dictionary:

$ ~/source/apertium-lex-tools/scripts/extract-sentences.py europarl.phrasetable.en-es europarl.biltrans-tok.en-es \
  > europarl.candidates.en-es

These are, roughly, the sentences that we can hope Apertium might be able to generate.
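The idea behind the extraction can be sketched in a few lines of Python (this is an illustration only, not the real extract-sentences.py, whose input formats are more involved):

```python
# Illustrative sketch: a sentence pair is a candidate when every aligned
# target word is among the translations the bilingual dictionary offers
# for its source word.
def is_candidate(aligned_pairs, biltrans):
    return all(tgt in biltrans.get(src, [tgt]) for src, tgt in aligned_pairs)

biltrans = {"language<n>": ["lengua<n>", "lenguaje<n>"]}
print(is_candidate([("language<n>", "lengua<n>")], biltrans))  # True
print(is_candidate([("language<n>", "idioma<n>")], biltrans))  # False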

Extract bilingual dictionary candidates

Using the phrasetable and the bilingual file we can extract candidates for the bilingual dictionary.

python3 ~/Apertium/apertium-lex-tools/scripts/extract-biltrans-candidates.py europarl.phrasetable.en-es europarl.biltrans-tok.en-es > europarl.biltrans-candidates.en-es 2> europarl.biltrans-pairs.en-es

where europarl.biltrans-candidates.en-es contains the generated entries for the bilingual dictionary.

Extract frequency lexicon

The next step is to extract the frequency lexicon.

$ python ~/source/apertium-lex-tools/scripts/extract-freq-lexicon.py europarl.candidates.en-es > europarl.lex.en-es

This file should look like:

$ cat europarl.lex.en-es  | head 
31381 union<n> unión<n> @
101 union<n> sindicato<n>
1 union<n> situación<n>
1 union<n> monetario<adj>
4 slope<n> pendiente<n> @
1 slope<n> ladera<n>

The highest-frequency translation is marked with an @.

Note: This frequency lexicon can be used as a substitute for "choosing the most general translation" in your bilingual dictionary.
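The construction of this lexicon can be sketched as follows (an illustration of the idea only, not the real extract-freq-lexicon.py):

```python
from collections import Counter

# Sketch: count source/target pairs seen in the candidate sentences and
# mark the most frequent translation of each source word with "@".
def freq_lexicon(pairs):
    counts = Counter(pairs)
    best = {}
    for (src, tgt), n in counts.items():
        if n > best.get(src, ("", 0))[1]:
            best[src] = (tgt, n)
    return [f"{n} {src} {tgt}" + (" @" if best[src][0] == tgt else "")
            for (src, tgt), n in counts.most_common()]

pairs = [("union<n>", "unión<n>")] * 3 + [("union<n>", "sindicato<n>")]
for line in freq_lexicon(pairs):
    print(line)
# 3 union<n> unión<n> @
# 1 union<n> sindicato<n>
```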

Generate patterns

Now we collect the n-gram context patterns from which we are going to generate the rules.

$ crisphold=1.5 # ratio of how many times you see the alternative translation compared to the default
$ python ~/source/apertium-lex-tools/scripts/ngram-count-patterns.py europarl.lex.en-es europarl.candidates.en-es $crisphold 2>/dev/null > europarl.ngrams.en-es

This script outputs lines in the following format:

-language<n>	and<cnjcoo> language<n> ,<cm>	lengua<n>	2
+language<n>	plain<adj> language<n> ,<cm>	lenguaje<n>	3
-language<n>	language<n> knowledge<n>	lengua<n>	4
-language<n>	language<n> of<pr> communication<n>	lengua<n>	3
-language<n>	Community<adj> language<n> .<sent>	lengua<n>	5
-language<n>	language<n> in~addition~to<pr> their<det><pos>	lengua<n>	2
-language<n>	every<det><ind> language<n>	lengua<n>	2
+language<n>	and<cnjcoo> *understandable language<n>	lenguaje<n>	2
-language<n>	two<num> language<n>	lengua<n>	8
-language<n>	only<adj> official<adj> language<n>	lengua<n>	2

The + and - indicate whether the line chooses the most frequent translation (-) or a translation which is not the most frequent (+). The pattern selecting the translation is then shown, followed by the translation and then the frequency.
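The role of the crisphold value (described above as the ratio of alternative-translation to default-translation counts) can be sketched like this; this is only an illustration of the threshold idea, not the real ngram-count-patterns.py logic:

```python
# Sketch: a context pattern yields a "+" line (selecting a non-default
# translation) only when the alternative is seen at least `crisphold`
# times as often as the default translation in that context.
def rule_sign(n_default, n_alt, crisphold=1.5):
    return "+" if n_alt >= crisphold * n_default else "-"

print(rule_sign(2, 3))  # "+": 3 >= 1.5 * 2
print(rule_sign(4, 2))  # "-": 2 <  1.5 * 4
```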

Filter rules

Now you can filter the rules, for example by removing rules with conjunctions, or removing rules with unknown words.

Generate rules

The final stage is to generate the rules:

python3 ~/source/apertium-lex-tools/scripts/ngrams-to-rules.py europarl.ngrams.en-es $crisphold > europarl.ngrams.en-es.lrx
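The output is a set of rules in the .lrx format used by the Constraint-based lexical selection module. A rule generated from the plain<adj> language<n> pattern above might look roughly like this (the exact attributes depend on the language pair):

```xml
<rules>
  <rule>
    <match lemma="plain" tags="adj"/>
    <match lemma="language" tags="n">
      <select lemma="lenguaje"/>
    </match>
    <match lemma="," tags="cm"/>
  </rule>
</rules>
```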

Process script

For the whole process you can run the following script:

CORPUS_DIR="/home/philip/Apertium/corpora/raw/europarl-fr-es"
CORPUS="Europarl3"
PAIR="es-fr"
SL="fr"
TL="es"
LEX_TOOLS="/home/philip/Apertium/apertium-lex-tools"
SCRIPTS="$LEX_TOOLS/scripts"
MOSESDECODER="/home/philip/Apertium/mosesdecoder/scripts/training"
TRAINING_LINES=50000
DATA="/home/philip/Apertium/apertium-fr-es"
BIN_DIR="/home/philip/Apertium/smt/bin"

# CLEAN CORPUS
perl "$MOSESDECODER/clean-corpus-n.perl" $CORPUS.$PAIR $SL $TL "$CORPUS.clean" 1 40;

#TAG CORPUS
cat "$CORPUS.clean.$SL" | tail -n $TRAINING_LINES | apertium -d "$DATA" $SL-$TL-tagger \
	| apertium-pretransfer > $CORPUS.tagged.$SL;

cat "$CORPUS.clean.$TL" | tail -n $TRAINING_LINES | apertium -d "$DATA" $TL-$SL-tagger \
	| apertium-pretransfer > $CORPUS.tagged.$TL;

cat "$CORPUS.tagged.$SL" | lt-proc -b "$DATA/$SL-$TL.autobil.bin" > $CORPUS.biltrans.$PAIR

N=`wc -l $CORPUS.clean.$SL | cut -d ' ' -f 1`

# REMOVE LINES WITH NO ANALYSES
seq 1 $N > $CORPUS.lines
paste $CORPUS.lines $CORPUS.tagged.$SL $CORPUS.tagged.$TL | grep '<' | cut -f1 > $CORPUS.lines.new
paste $CORPUS.lines $CORPUS.tagged.$SL $CORPUS.tagged.$TL | grep '<' | cut -f2 > $CORPUS.tagged.$SL.new
paste $CORPUS.lines $CORPUS.tagged.$SL $CORPUS.tagged.$TL | grep '<' | cut -f3 > $CORPUS.tagged.$TL.new

mv $CORPUS.lines.new $CORPUS.lines
mv $CORPUS.tagged.$SL.new $CORPUS.tagged.$SL
mv $CORPUS.tagged.$TL.new $CORPUS.tagged.$TL

# TRIM TAGS
cat $CORPUS.tagged.$SL | $LEX_TOOLS/process-tagger-output $DATA/$SL-$TL.autobil.bin -p -t > $CORPUS.tag-tok.$SL
cat $CORPUS.tagged.$TL | $LEX_TOOLS/process-tagger-output $DATA/$TL-$SL.autobil.bin -p -t > $CORPUS.tag-tok.$TL
cat $CORPUS.tagged.$SL | $LEX_TOOLS/process-tagger-output $DATA/$SL-$TL.autobil.bin -b -t > $CORPUS.biltrans-tok.$PAIR

# ALIGN
perl $MOSESDECODER/train-model.perl -external-bin-dir "$BIN_DIR" -corpus $CORPUS.tag-tok \
 -f $TL -e $SL -alignment grow-diag-final-and -reordering msd-bidirectional-fe \
-lm 0:5:/home/philip/Apertium/gsoc2013/giza/europarl.lm:0 2>&1

# EXTRACT
zcat giza.$SL-$TL/$SL-$TL.A3.final.gz | $SCRIPTS/giza-to-moses.awk > $CORPUS.phrasetable.$SL-$TL

# SENTENCES
python3 $SCRIPTS/extract-sentences.py $CORPUS.phrasetable.$SL-$TL $CORPUS.biltrans-tok.$PAIR \
  > $CORPUS.candidates.$SL-$TL 2>/dev/null

# FREQUENCY LEXICON
python $SCRIPTS/extract-freq-lexicon.py $CORPUS.candidates.$SL-$TL > $CORPUS.lex.$SL-$TL 2>/dev/null


# BILTRANS CANDIDATES
python3 $SCRIPTS/extract-biltrans-candidates.py $CORPUS.phrasetable.$SL-$TL $CORPUS.biltrans-tok.$PAIR \
  > $CORPUS.biltrans-entries.$SL-$TL 2>$CORPUS.biltrans-pairs.$SL-$TL

# NGRAM PATTERNS
crisphold=1.5
python $SCRIPTS/ngram-count-patterns.py $CORPUS.lex.$SL-$TL $CORPUS.candidates.$SL-$TL $crisphold 2>/dev/null > $CORPUS.ngrams.$SL-$TL

# FILTER PATTERNS


# NGRAMS TO RULES
crisphold=1.5
python3 $SCRIPTS/ngrams-to-rules.py $CORPUS.ngrams.$SL-$TL $crisphold > $CORPUS.ngrams.$SL-$TL.lrx