==The Idea==

The idea is to allow old Apertium transfer rules to be ambiguous, i.e. to allow a set of rules to match the same general input pattern. This is more effective than the existing situation, in which the first rule in the XML transfer file takes exclusive precedence and blocks out all its ambiguous peers during the transfer precompilation stage, often leading to inaccurate translation.

To achieve this, the transfer module uses a set of predefined or pre-trained (more specific) weighted patterns provided for each group of ambiguous rules. This way, if a specific pattern matches multiple transfer rules, the rule with the largest weight for that pattern is applied.

If no weighted pattern matches, the first rule in the XML transfer file that matches the general pattern is still considered the default one and is applied.

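For intuition, here is a minimal Python sketch of this selection scheme. It is an illustration only, not the module's actual C++ code; the rule and pattern representations are hypothetical:

<pre>
# Minimal sketch of weighted rule selection; data structures are hypothetical.
# Each ambiguous rule carries a list of (specific_pattern, weight) pairs.

def matches(pattern, tokens):
    # Hypothetical match: one required tag per token position.
    return len(pattern) == len(tokens) and all(
        tag in tok["tags"] for tag, tok in zip(pattern, tokens))

def select_rule(tokens, ambiguous_rules):
    best_rule, best_weight = None, float("-inf")
    for rule in ambiguous_rules:
        for pattern, weight in rule["weighted_patterns"]:
            if matches(pattern, tokens) and weight > best_weight:
                best_rule, best_weight = rule, weight
    # If no weighted pattern matched, fall back to the first rule
    # in the transfer file (the default).
    return best_rule if best_rule is not None else ambiguous_rules[0]
</pre>
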
The module (apertium-ambiguous) can be found at https://github.com/sevilaybayatli/apertium-ambiguous.

==Configure, build and install==

We built this transfer module on top of the old transfer module and the rest of the Apertium pipeline: the morphological analyser, morphological disambiguator, lexical transfer, lexical selection, morphological generator, and reformatter. The module itself is written in C++ and translates text from Kazakh to Turkish, aiming to produce the best Turkish translation by weighting ambiguous transfer rules.

<code>cd</code> into <b>apertium-ambiguous</b> before running the commands shown below:

<pre>
./autogen.sh
./configure
make
</pre>

==How to use apertium-ambiguous for your language pair==

First, each sentence is passed through the Apertium tools biltrans and lextor, which produce a string of tokens (words), each with its translations and part-of-speech tags.

This is the real input to our program. We first split these strings into source and target tokens along with their tags, then try to match the tags against categories from the transfer file, since these matches let us match the tokens to rules. The second step is to apply the rules to the matched tokens. If different rules apply to one token, that word is ambiguous, and we must decide which rule to use. If many tokens are ambiguous, the whole sentence becomes far more ambiguous, as the number of possible combinations equals the product of the numbers of ambiguous rules of the individual tokens.

The output of this phase is all possible combinations of translations of the sentence, together with their analyses (the output of the rules), the final translation of every combination, and the weight of each combination (the weights sum to 1), computed using the KenLM Language Model Toolkit.

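As a concrete illustration of the combinatorics and the weighting, here is a short Python sketch (hypothetical data; the real module is written in C++ and uses KenLM for scoring):

<pre>
from itertools import product

# Hypothetical ambiguity: lists of applicable rules per token.
rules_per_token = [["r1", "r2"], ["r3"], ["r4", "r5", "r6"]]

# Number of combinations = product of per-token ambiguity counts: 2*1*3 = 6.
combinations = list(product(*rules_per_token))
print(len(combinations))  # 6

# Pretend each combination got a log10 score from the language model;
# normalise so that the combination weights sum to 1.
log10_scores = [-12.3, -10.8, -11.5, -13.0, -10.9, -12.1]
probs = [10 ** s for s in log10_scores]
weights = [p / sum(probs) for p in probs]
print(sum(weights))  # ~1.0
</pre>
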
For this tutorial, we will be using the language pair apertium-kaz-tur.

===Download a wikimedia dump===

Download a Wikipedia dump from http://dumps.wikimedia.org:

<pre>
$ wget https://dumps.wikimedia.org/kkwiki/latest/kkwiki-latest-pages-articles.xml.bz2
</pre>

To use any other language, simply replace the occurrences of 'kk' with the two-letter code of your language.

Next, extract the text using the WikiExtractor script:

<pre>
$ git clone https://github.com/apertium/WikiExtractor.git
$ cd WikiExtractor
$ python3 WikiExtractor.py --infn ../kkwiki-latest-pages-articles.xml.bz2
</pre>

The extracted file will be named <b>wiki.txt</b> and placed in your current working directory; it is used in the following steps of the project.

===Install segmenter===

Install the Kazakh segmenter from https://github.com/diasks2/pragmatic_segmenter/tree/kazakh

To use pragmatic_segmenter, follow these steps:

# Install Ruby 2.3 by running <code>sudo apt-get install ruby-full</code>
# Run <code>gem install pragmatic_segmenter</code>

The following piece of code uses the segmenter to segment a corpus file and write the segmented sentences to an output file. It is the script <b>sentenceTokenizer.rb</b>, located at https://github.com/sevilaybayatli/apertium-ambiguous/blob/master/scripts:

<pre>
require 'pragmatic_segmenter'

# Read the corpus (ARGV[1]) line by line.
File.open(ARGV[1]).each do |line1|
  # Remove characters that would confuse the segmenter.
  line1.delete! ('\\\(\)\[\]\{\}\<\>\|\$\/\'\"')
  ps = PragmaticSegmenter::Segmenter.new(text: line1, language: ARGV[0], doc_type: 'txt')
  sentences = ps.segment
  # Append each segmented sentence to the output file (ARGV[2]).
  File.open(ARGV[2], "a") do |line2|
    sentences.each { |sentence| line2.puts sentence }
  end
end
</pre>

Break the corpus into sentences using the Ruby program <b>sentenceTokenizer.rb</b>, built on the pragmatic segmenter:

<pre>
ruby2.3 sentenceTokenizer.rb $langCode $inputFile sentences.txt

For example:

ruby2.3 sentenceTokenizer.rb kk wiki.txt sentences.txt
</pre>

Here the langCode for Kazakh is <b>kk</b>, the inputFile is the <b>Kazakh corpus</b>, and sentences.txt contains the <b>segmented sentences</b>.

===Apertium language pairs modules===

You need Apertium and the language pair installed in order to use the language modules. The steps below show how to run the Apertium modules to get the output that apertium-ambiguous expects. In the commands, $pairPar stands for the parent directory of the Apertium pair (<b>apertium-kaz-tur</b>); if the pair is in your home directory, this is <b>$HOME</b>.

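For convenience, you can set these as shell variables once and reuse them in all of the commands below (an illustrative convention; adjust the values to match your setup):

<pre>
$ export pairPar=$HOME
$ export pairCode=kaz-tur
</pre>
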
To apply the apertium tool <b>biltrans</b> to the segmented sentences:

<pre>
apertium -d $pairPar/apertium-$pairCode $pairCode-biltrans sentences.txt biltrans.txt

For example:

apertium -d $HOME/apertium-kaz-tur kaz-tur-biltrans sentences.txt biltrans.txt
</pre>

To apply the apertium tool <b>lextor</b> to the output of biltrans:

<pre>
lrx-proc -m $pairPar/apertium-$pairCode/$pairCode.autolex.bin inFilePath > outFilePath

For example:

lrx-proc -m $HOME/apertium-kaz-tur/kaz-tur.autolex.bin biltrans.txt > lextor.txt
</pre>

To run the <b>rules-applier</b> program:

<pre>
./rules-applier localeId transferFile.t1x sentences.txt lextor.txt rulesOut.txt

For example:

./rules-applier kk_KZ $HOME/transferFile.t1x sentences.txt lextor.txt rulesOut.txt
</pre>

Here localeId is the ICU locale ID of the source language, sentences.txt contains the source-language sentences, and rulesOut.txt is the output file for the results.

To apply the apertium tool <b>interchunk</b> to the rulesOut.txt file:

<pre>
apertium-interchunk $pairPar/apertium-$pairCode/apertium-$pairCode.$pairCode.t2x $pairPar/apertium-$pairCode/$pairCode.t2x.bin rulesOut.txt interchunk.txt

For example:

apertium-interchunk $HOME/apertium-kaz-tur/apertium-kaz-tur.kaz-tur.t2x $HOME/apertium-kaz-tur/kaz-tur.t2x.bin rulesOut.txt interchunk.txt
</pre>

To apply the apertium tool <b>postchunk</b> to the <b>interchunk</b> output file:

<pre>
apertium-postchunk $pairPar/apertium-$pairCode/apertium-$pairCode.$pairCode.t3x $pairPar/apertium-$pairCode/$pairCode.t3x.bin interchunk.txt postchunk.txt

For example:

apertium-postchunk $HOME/apertium-kaz-tur/apertium-kaz-tur.kaz-tur.t3x $HOME/apertium-kaz-tur/kaz-tur.t3x.bin interchunk.txt postchunk.txt
</pre>

To apply the apertium tool <b>transfer</b> to the <b>postchunk</b> output file:

# INPUT: output of the postchunk module
# OUTPUT: morphologically generated sentences in the target language

<pre>
apertium-transfer -n $pairPar/apertium-$pairCode/apertium-$pairCode.$pairCode.t4x $pairPar/apertium-$pairCode/$pairCode.t4x.bin postchunk.txt | lt-proc -g $pairPar/apertium-$pairCode/$pairCode.autogen.bin | lt-proc -p $pairPar/apertium-$pairCode/$pairCode.autopgen.bin > transfer.txt

For example:

apertium-transfer -n $HOME/apertium-kaz-tur/apertium-kaz-tur.kaz-tur.t4x $HOME/apertium-kaz-tur/kaz-tur.t4x.bin postchunk.txt | lt-proc -g $HOME/apertium-kaz-tur/kaz-tur.autogen.bin | lt-proc -p $HOME/apertium-kaz-tur/kaz-tur.autopgen.bin > transfer.txt
</pre>

===Install and build kenlm===

Download and install kenlm by following the steps under 'USAGE' at https://kheafield.com/code/kenlm/

Download a big Turkish corpus from the Wikimedia dumps:

<pre>
$ wget https://dumps.wikimedia.org/trwikinews/20181020/trwikinews-20181020-pages-articles.xml.bz2
</pre>

For training, follow these steps:

# <b>Estimating:</b> run <code>bin/lmplz -o 5 <text >text.arpa</code>
# <b>Querying:</b> run <code>bin/build_binary text.arpa text.binary</code>

The Python script (score-sentences.py) used to score the target-language sentences with the language model can be found at https://github.com/sevilaybayatli/apertium-ambiguous/tree/master/scripts.

To run the model weighting script on the transfer output, use this command:

<pre>
python score-sentences.py arpa_or_binary_LM_file target_lang_file weights_file

For example:

python2 score-sentences.py text.binary target-sentences.txt weights.txt
</pre>

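For reference, here is a minimal sketch of what such a scoring script can look like, using the kenlm Python module (an assumption for illustration; the actual score-sentences.py in the repository may differ):

<pre>
import sys
import kenlm  # the Python wrapper shipped with kenlm

# Usage: python score-sketch.py text.binary target-sentences.txt weights.txt
# (score-sketch.py is a hypothetical name for this illustration)
model = kenlm.Model(sys.argv[1])

with open(sys.argv[2]) as sentences, open(sys.argv[3], "w") as weights:
    for sentence in sentences:
        # score() returns the log10 probability of the whole sentence.
        weights.write("%f\n" % model.score(sentence.strip()))
</pre>
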
===Install and build yasmet===

Download and compile yasmet as follows.

Download yasmet either from https://www-i6.informatik.rwth-aachen.de/web/Software/YASMET.html or from https://github.com/apertium/apertium-lex-tools/blob/master/yasmet.cc

To build and compile, follow the steps below:

# <code>g++ -o yasmet yasmet.cc</code>
# <code>./yasmet</code>

If the compilation doesn't work, try <code>g++ -o yasmet yasmet.cc -std=gnu++98</code> instead.

Then move the yasmet binary into the apertium-ambiguous directory:

<pre>
$ mv yasmet ./apertium-ambiguous
</pre>

Run yasmet-formatter to prepare the yasmet datasets. This also generates the analysis output file, as well as the best translations by model weight (scored with the language model) in the file <b>modelWeight.txt</b> and random translations (choosing the rule to apply randomly from the transfer file) in the file <b>randomWeight.txt</b>.

To run the yasmet-formatter program:

<pre>
./yasmet-formatter $localeId transferFile.t1x sentences.txt lextor.txt transfer.txt weights.txt $outputFile $datasets

For example:

./yasmet-formatter kk_KZ transferFile.t1x sentences.txt lextor.txt transfer.txt weights.txt sentences.out datasets
</pre>

===Training and Testing apertium-ambiguous===

<b>Training</b>

<pre>
./yasmet-formatter icu-locale-id transfer-file-path sentences-file-path lextor-file-path transfer-out-file-path(postchunk2-out) model-weights output-file-path datasets-folder-name

For example:

./yasmet-formatter kk_KZ transferFile.t1x test.txt lextor.txt transfer.txt weights.txt test.out datasets
</pre>

<b>Generate-yasmet-models.sh</b>

Generate the yasmet models from the yasmet datasets, either using the bash script or manually, by running one of the following commands:

<pre>
Either

bash generate-yasmet-models.sh datasets models

or

./yasmet < dataset-path > model-path
</pre>

<b>Testing</b>

Run beam search with beam size k on the sentences file, writing the results into the file "BeamSearch-k.txt":

<pre>
./beam-search localeId transferFile.t1x sentencesFile lextorFile transferOutFile modelsFolder k1 k2 k3 ...

For example:

./beam-search kk_KZ transferFile.t1x test.txt lextor.txt transfer.txt models 2 4 8 10
</pre>

Here test.txt is the source-language (Kazakh) text, the output files are BeamSearch-2.txt, BeamSearch-4.txt, and so on, and k can be 2 4 8 10 or any other numbers.

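For intuition, the beam search over ambiguous rule choices can be sketched as follows (a simplified Python illustration, not the module's actual C++ code; scores are assumed to be log-probabilities):

<pre>
import heapq

def beam_search(rules_per_token, score, k):
    # Each beam entry is (accumulated score, rules chosen so far).
    beam = [(0.0, [])]
    for candidates in rules_per_token:
        expanded = [(s + score(rule), chosen + [rule])
                    for s, chosen in beam
                    for rule in candidates]
        # Keep only the k highest-scoring partial combinations.
        beam = heapq.nlargest(k, expanded, key=lambda entry: entry[0])
    return beam  # the k best full combinations with their scores
</pre>
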
Enjoy using apertium-ambiguous :)

[[Category:Documentation in English]]