Using weights for ambiguous rules
The Idea
The idea is to allow the old Apertium transfer rules to be ambiguous, i.e., to allow a set of rules to match the same general input pattern, as opposed to the existing situation in which the first rule in the XML transfer file takes exclusive precedence and blocks out all of its ambiguous peers during the transfer precompilation stage. To decide which rule applies, the transfer module uses a set of predefined or pretrained, more specific, weighted patterns provided for each group of ambiguous rules. This way, if a specific pattern matches, the rule with the highest weight for that pattern is applied.
The first rule in the XML transfer file that matches the general pattern is still considered the default one and is applied if no weighted pattern matches.
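As a rough illustration of this selection logic, here is a minimal sketch with hypothetical data structures (the actual transfer module is written in C++; see the Implementation section below):
<pre>
# Minimal sketch of weighted rule selection; the data layout is hypothetical.
def choose_rule(ambiguous_rules, matched_pattern, weights, default_rule):
    """Pick the rule with the highest weight for the specific pattern that matched.

    ambiguous_rules -- ids of the rules that match the same general pattern
    matched_pattern -- the more specific, weighted pattern that fired
    weights         -- {(pattern, rule_id): weight}, predefined or pretrained
    default_rule    -- the first matching rule in the XML transfer file
    """
    scored = [(weights[(matched_pattern, r)], r)
              for r in ambiguous_rules if (matched_pattern, r) in weights]
    if not scored:
        return default_rule      # no weighted pattern matched: use the default
    return max(scored)[1]        # the rule with the highest weight wins
</pre>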
How to use apertium-kaz-tur-mt for your language pair
1) Download a Wikipedia dump from http://dumps.wikimedia.org
$ wget https://dumps.wikimedia.org/kkwiki/latest/kkwiki-latest-pages-articles.xml.bz2
2) Extract the text using WikiExtractor:
$ wget https://svn.code.sf.net/p/apertium/svn/trunk/apertium-tools/WikiExtractor.py
$ python3 WikiExtractor.py --infn kkwiki-latest-pages-articles.xml.bz2
3) Remove unnecessary characters ( " , < , > , | , $ , / , \ , ( , ) , etc. ) that cause Apertium tools to stop or malfunction.
4) Put the extracted wiki.txt into the project directory.
5) Install Kazakh segmenter from https://github.com/diasks2/pragmatic_segmenter/tree/kazakh
<pre>
require 'pragmatic_segmenter'

File.open(ARGV[0]).each do |line1|
  ps = PragmaticSegmenter::Segmenter.new(text: line1, language: 'kk', doc_type: 'txt')
  sentences = ps.segment
  File.open(ARGV[1], "a") do |line2|
    sentences.each { |sentence| line2.puts sentence }
  end
end
</pre>
6) Install and build KenLM (https://kheafield.com/code/kenlm/) to train a language model on a Turkish corpus and score the sentences, then do the following:
a) Download a large Turkish corpus from the Wikimedia dumps: https://dumps.wikimedia.org/trwikinews/20181020/trwikinews-20181020-pages-articles.xml.bz2
b) Train a 5-gram language model on the Turkish corpus with the following commands:
c) Estimate the model: bin/lmplz -o 5 <text >text.arpa
d) Build a binary file for fast querying: bin/build_binary text.arpa text.binary
e) The Python scripts used to score sentences (exampleken1, kenlm.pyx, genalltra.py) live at https://github.com/sevilaybayatli/apertium-kaz-tur-mt/tree/master/scripts; they are run automatically.
f) Add the path of text.binary inside exampleken1.
7) Download and compile Yasmet by following these instructions:
a) Download yasmet from https://www-i6.informatik.rwth-aachen.de/web/Software/YASMET.html or https://github.com/apertium/apertium-lex-tools/blob/master/yasmet.cc
b) Compile it: g++ -o yasmet yasmet.cc
c) Run it with ./yasmet
8) You need Apertium and the language pair installed in order to use the language modules inside the code. The steps below show how the remaining Apertium modules are used inside the code:
a) Apply the Apertium tool "biltrans" to the segmented sentences:
apertium -d $HOME/apertium-kaz-tur kaz-tur-biltrans input_file output_file
b) Apply the Apertium tool "lextor" to the output of biltrans:
lrx-proc -m $HOME/apertium-kaz-tur/kaz-tur.autolex.bin inFilePath > outFilePath
c) Apply the Apertium tool "interchunk" to that file:
apertium-interchunk $HOME/apertium-kaz-tur/apertium-kaz-tur.kaz-tur.t2x $HOME/apertium-kaz-tur/kaz-tur.t2x.bin input_file output_file
d) Apply the Apertium tool "postchunk" to the interchunk output file:
apertium-postchunk $HOME/apertium-kaz-tur/apertium-kaz-tur.kaz-tur.t3x $HOME/apertium-kaz-tur/kaz-tur.t3x.bin input_file output_file
e) Apply the Apertium tool "transfer" to the postchunk output file:
apertium-transfer -n $HOME/apertium-kaz-tur/apertium-kaz-tur.kaz-tur.t4x $HOME/apertium-kaz-tur/kaz-tur.t4x.bin input_file | lt-proc -g $HOME/apertium-kaz-tur/kaz-tur.autogen.bin | lt-proc -p $HOME/apertium-kaz-tur/kaz-tur.autopgen.bin > output_file
9) Build and compile the system with these commands:
a) ./autogen.sh
b) ./configure
c) make
d) Training: ./machine-translation input-file output-file
e) Testing: ./machine-translation input-file output-file number-of-beam
Here input-file is the source-language (Kazakh) text, output-file is the target-language (Turkish) translation, and number-of-beam is the beam size (8 or any other number).
Note: You can find the final result inside results/beamResults.txt.
Project working stages
1) Download a Wikipedia dump from http://dumps.wikimedia.org
$ wget https://dumps.wikimedia.org/kkwiki/latest/kkwiki-latest-pages-articles.xml.bz2
2) Extract the text using WikiExtractor:
$ wget https://svn.code.sf.net/p/apertium/svn/trunk/apertium-tools/WikiExtractor.py
$ python3 WikiExtractor.py --infn kkwiki-latest-pages-articles.xml.bz2
3) Remove some unnecessary characters ( " , < , > , | , $ , / , \ , ( , ) , etc. ) that cause Apertium tools to stop or malfunction, as sketched below.
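A minimal cleaning sketch (the input name wiki.txt comes from the steps above, while the output name and the exact character set are assumptions; the project may do this differently):
<pre>
import re

# Characters that can confuse the Apertium stream format or the shell.
BAD_CHARS = re.compile(r'["<>|$/\\()]')

# wiki.txt is the extracted dump; the cleaned output name is made up here.
with open('wiki.txt') as src, open('wiki.clean.txt', 'w') as dst:
    for line in src:
        dst.write(BAD_CHARS.sub(' ', line))
</pre>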
4) Break the corpus into sentences using the pragmatic segmenter. This piece of code uses the segmenter to segment a corpus file and output the segmented sentences to a file.
<pre>
require 'pragmatic_segmenter'

File.open(ARGV[0]).each do |line1|
  ps = PragmaticSegmenter::Segmenter.new(text: line1, language: 'kk', doc_type: 'txt')
  sentences = ps.segment
  File.open(ARGV[1], "a") do |line2|
    sentences.each { |sentence| line2.puts sentence }
  end
end
</pre>
We call this script from our program with:
ruby2.3 kazSentenceTokenizer.rb input_file output_file
5) Apply the Apertium tool "biltrans" to the segmented sentences.
apertium -d $HOME/apertium-kaz-tur kaz-tur-biltrans input_file output_file
6) Apply the Apertium tool "lextor" to the output of biltrans.
lrx-proc -m $HOME/apertium-kaz-tur/kaz-tur.autolex.bin inFilePath > outFilePath
7) Load the output of "lextor" - each line as a string - into a vector data structure in our program.
8) Split each "biltrans" sentence into source and target tokens and tags.
9) Match the source tags with their categories in the transfer file.
10) From the matched tags, determine which rules apply.
11) Apply the matched rules, taking care of the multiple (ambiguous) rules applied to the same word(s).
12) Get all the combination outputs from the applied rules, as sketched below.
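The number of candidate translations is the product of the number of ambiguous rules applied to each group of words. A small sketch of how the combinations can be enumerated (the data reuse the w1..w5 / r1..r7 example from the Learner section below):
<pre>
from itertools import product

# Each inner list holds the alternative outputs of the ambiguous rules
# that matched one group of words.
outputs_per_group = [
    ['r1(w1 w2 w3)', 'r2(w1 w2) r5(w3)', 'r3(w1) r4(w2) r5(w3)'],  # 3 alternatives
    ['r6(w4 w5)', 'r7(w4 w5)'],                                    # 2 alternatives
]

# 3 * 2 = 6 candidate translations for the sentence.
for combination in product(*outputs_per_group):
    print(' + '.join(combination))
</pre>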
13) Write these outputs on a file, then apply apertium tool "interchunk" to that file.
apertium-interchunk $HOME/apertium-kaz-tur/apertium-kaz-tur.kaz-tur.t2x $HOME/apertium-kaz-tur/kaz-tur.t2x.bin input_file output_file
14) Apply apertium tool "postchunk" to the "interchunk" output file.
apertium-postchunk $HOME/apertium-kaz-tur/apertium-kaz-tur.kaz-tur.t3x $HOME/apertium-kaz-tur/kaz-tur.t3x.bin input_file output_file
15) Apply apertium tool "transfer" to the "postchunk" output file.
apertium-transfer -n $HOME/apertium-kaz-tur/apertium-kaz-tur.kaz-tur.t4x $HOME/apertium-kaz-tur/kaz-tur.t4x.bin input_file | lt-proc -g $HOME/apertium-kaz-tur/kaz-tur.autogen.bin | lt-proc -p $HOME/apertium-kaz-tur/kaz-tur.autopgen.bin > output_file
16) "transfer" output is the target -translation- sentence. We then get the scores of these sentences from the language model.
17) For given source sentence , there are one or more target sentence , each with a score now. We normalize their scores to make their sum = 1.
18) We then prepare the yasmet files , and then train them and get yasmet models.
19) These models are used to get weights for the ambiguous rules in beam search.
The output of the beam search is not optimal , but it gives one good translation as output.
20) The project lives at https://github.com/sevilaybayatli/apertium-kaz-tur-mt
21) Build and compile the system with these commands:
a) ./autogen.sh
b) ./configure
c) make
d) Training: ./machine-translation inputfile outputfile
e) Testing: ./machine-translation inputfile outputfile number-of-beam
Here inputfile is the Kazakh text, outputfile is its Turkish translation, and number-of-beam is the beam size (8 or any other number).
Implementation
We have created a transfer module by using the old transfer module and the rest of the Apertium tools, such as lexical transfer, lexical selection, and the morphological generator. We wrote a module in C++ that translates text from Kazakh to Turkish. The module tries to give the best Turkish translation of the Kazakh input by applying the algorithms and methods described below.
Step 1
We have two very large corpora (wiki dumps) of 640 MB and 320 MB of wiki text. Since our application takes a sentence as input, we must split the corpus into sentences. First, we preprocess the corpus, keeping sentences that begin with a capital letter, and remove Latin-alphabet text from it. We then apply a rule-based sentence boundary detection tool called "pragmatic segmenter": https://github.com/diasks2/pragmatic_segmenter/tree/kazakh
To use pragmatic_segmenter you need to do the following:
1) Download Ruby 2.3
2) gem install pragmatic_segmenter
3) Inside the code, run it like "ruby2.3 kazSentenceTokenizer.rb"
Step 2
First of all, we take a sentence and give it to the rest of the Apertium tools, biltrans and lextor, to get a string of tokens (words), each with its translations and part-of-speech tags. This is the real input to our program. We first split these strings into source and target tokens along with their tags, then try to match the tags with categories from the transfer file, as these matches help us match the tokens to the rules. Second, we apply these rules to the matched tokens. If different rules are applied to one token, that word is ambiguous and we must decide which rule to use; if many tokens are ambiguous, the whole sentence becomes far more ambiguous, since the number of possible combinations equals the product of the number of ambiguous rules for each token. The output of this step is all the possible combinations of translations of the sentence, along with their analyses (the output of the rules).
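As a rough sketch of the splitting done at the start of this step (the real code is the C++ module; the lexical-unit layout in the comment is the usual Apertium stream format, and details such as multiple translations per unit are ignored):
<pre>
import re

# A lexical unit in the biltrans/lextor output looks roughly like
#   ^source_lemma<tag1><tag2>/target_lemma<tag1><tag2>$
LU = re.compile(r'\^(.*?)\$')

def parse_side(side):
    lemma = side.split('<', 1)[0]
    tags = re.findall(r'<([^>]+)>', side)
    return lemma, tags

def split_lexical_units(line):
    units = []
    for lu in LU.findall(line):
        source, _, target = lu.partition('/')
        units.append((parse_side(source), parse_side(target)))
    return units
</pre>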
Step 3
After getting all possible translations of every combination, we score them (normalizing the scores so that their sum is 1) using a language model. In this project we used the KenLM Language Model Toolkit: https://kheafield.com/code/kenlm/. The language model is applied to the target language, Turkish.
You should follow these steps to work with the language model:
1) Download and install the KenLM language model from https://kheafield.com/code/kenlm/
2) Download a large Turkish corpus from the Wikimedia dumps: https://dumps.wikimedia.org/trwikinews/20181020/trwikinews-20181020-pages-articles.xml.bz2
3) Train KenLM on the Turkish corpus as a 5-gram language model with the following commands:
a) Estimate the model: bin/lmplz -o 5 <text >text.arpa
b) Build a binary file for fast querying: bin/build_binary text.arpa text.binary
4) You need either python2 or python3 installed, and you have to add this line to the code: "python2 $HOME/Normalisek/exampleken1.py <"
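A small sketch of how the binary model can then be queried from Python and the scores normalized (this uses the kenlm Python module; the candidate sentences are placeholders):
<pre>
import kenlm

# Binary model built with build_binary above.
model = kenlm.Model('text.binary')

candidates = ['first candidate translation', 'second candidate translation']

# kenlm returns log10 probabilities; convert them and normalize so that the
# scores of the candidates for one source sentence sum to 1.
probs = [10 ** model.score(s, bos=True, eos=True) for s in candidates]
total = sum(probs)
normalized = [p / total for p in probs]
print(normalized)
</pre>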
Step 4
A specific format is required for the dataset used by the unsupervised machine learner (YASMET). Each dataset corresponds to a certain pattern that ambiguous rules are applied to, where the features are the different words matched by that pattern, along with the rule numbers and their precalculated weights. The challenge in producing that format was the need to modify and introduce new data structures into the old code, which was not an easy task. Having finished it successfully, we are now at the step of translating that data into a table and feeding it to the learner.
Step 5
Download and compile Yasmet by following these instructions:
a) Download yasmet from https://www-i6.informatik.rwth-aachen.de/web/Software/YASMET.html or https://github.com/apertium/apertium-lex-tools/blob/master/yasmet.cc
b) Compile it: g++ -o yasmet yasmet.cc
c) Run it with ./yasmet
Learner:
Suppose we have a sentence with words w1 w2 w3 w4 w5,
where rules r1, r2, r3, r4, r5 are applied to w1 w2 w3 as follows:
r1 applied on => w1 w2 w3
r2 applied on => w1 w2
r3 applied on => w1
r4 applied on => w2
r5 applied on => w3
And rules r6, r7 are applied to w4 w5 as follows:
r6 applied on => w4 w5
r7 applied on => w4 w5
So we now have 3*2 possible translations for that sentence, with the ambiguous rules applied and with their normalized scores, as follows:
r1 - r6 => 0.2
r1 - r7 => 0.1
r2 - r5 - r6 => 0.1
r2 - r5 - r7 => 0.3
r3 - r4 - r5 - r6 => 0.1
r3 - r4 - r5 - r7 => 0.2
Next we prepare the format of the yasmet files, by first calculating the score of each rule (or set of rules) applied to the same words, accumulating them from the normalized scores as follows:
r1 => 0.1+0.2 = 0.3
r2-r5 => 0.1+0.3 = 0.4
r3-r4-r5 => 0.1+0.2 = 0.3
r6 => 0.2+0.1+0.1 = 0.4
r7 => 0.1+0.3+0.2 = 0.6
So the yasmet format for file (r1+r2-r5+r3-r4-r5) is :
3
0 $ 0.3 # w1_0:0 w2_1:0 w3_2:0 # w1_0:1 w2_1:1 w3_2:1 # w1_0:2 w2_1:2 w3_2:2 #
1 $ 0.4 # w1_0:0 w2_1:0 w3_2:0 # w1_0:1 w2_1:1 w3_2:1 # w1_0:2 w2_1:2 w3_2:2 #
2 $ 0.3 # w1_0:0 w2_1:0 w3_2:0 # w1_0:1 w2_1:1 w3_2:1 # w1_0:2 w2_1:2 w3_2:2 #
And the yasmet format for file (r6+r7) is :
2
0 $ 0.4 # w4_0:0 w5_1:0 # w4_0:1 w5_1:1 #
1 $ 0.6 # w4_0:0 w5_1:0 # w4_0:1 w5_1:1 #
We do this for all the sentences, accumulating the yasmet data for each file. At the end we train a model for each file, which is later used to obtain the scores of such rules in the beam-search algorithm.
We train the model "r6+r7.model" from the given yasmet file "r6+r7" with the command:
./yasmet < r6+r7 > r6+r7.model
The model "r6+r7.model" would be :
@@@CORRECTIVE-FEATURE@@@ 1
w4_0:0 score1
w5_1:0 score2
w4_0:1 score3
w5_1:1 score4
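A sketch of how the accumulated scores and word features can be written out in the yasmet format shown above (the data structures and file handling are simplified; the project's own code generates these files while producing the combinations):
<pre>
import os

def append_yasmet_events(path, words, scores):
    """Append the yasmet events of one sentence to the file of one rule group.

    words  -- the source words covered by the ambiguous rules, e.g. ['w1', 'w2', 'w3']
    scores -- the accumulated normalized score of each class (rule combination),
              e.g. [0.3, 0.4, 0.3]
    """
    classes = len(scores)
    new_file = not os.path.exists(path)
    with open(path, 'a') as f:
        if new_file:
            f.write('%d\n' % classes)        # first line: the number of classes
        for cls, score in enumerate(scores):
            # the same word features are listed once per class,
            # indexed by the class number after the colon
            features = ' # '.join(
                ' '.join('%s_%d:%d' % (w, i, c) for i, w in enumerate(words))
                for c in range(classes))
            f.write('%d $ %g # %s #\n' % (cls, score, features))

# Reproduces the "r1+r2-r5+r3-r4-r5" file from the example above.
append_yasmet_events('r1+r2-r5+r3-r4-r5', ['w1', 'w2', 'w3'], [0.3, 0.4, 0.3])
</pre>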
The following steps apply only to test data.
Step 6
We took 100 new sentences from Wikipedia (wiki dumps) to test our system. These new data go through all the steps except the learning step (YASMET); the learner is used only during training.
Step 7
Applying the beam search algorithm:
Input:
- beam : the beam size
- slTokens : the source word indices
- ambigInfo : a data structure holding all the ambiguous rules with their corresponding word indices
- classesWeights : the yasmet weights, loaded from the model files into a map
Output:
- beamTree : the (beam size) highest-weight translations of the given source words. In practice the tree holds the indices of the chosen rules along with their summed weights.
Algorithm:
- First we get a set of ambiguous rules applied to some words, then we get the weight of these words for every rule from the yasmet weights.
- We build a new tree for these new words. The tree is just a vector of vectors of rule indices along with their summed weights.
For example, suppose that at some iteration we have the following set of rules (r for rule and w for word):
r1 applied on => w1 w2 w3
r2 applied on => w1 w2
r3 applied on => w1
r4 applied on => w2
r5 applied on => w3
We then have 3 different translations for these 3 words.
We then build the tree as follows :
--------> r1 : weight1
--------> r2 - r5 : weight2
--------> r3 - r4 - r5 : weight3
- Then we expand our beamTree by the number of rules we have and merge the two trees. So if the beam tree currently has, say, 6 translations, then with the 3-way tree we just built we expand the beamTree to 6*3 = 18 translations and merge the new tree into it.
- Then we sort those 18 translations in descending order of their summed weights.
- Then we reduce our beamTree so that it keeps no more than beam-size translations. So if the beam size is 8, we remove the 10 lowest-scoring translations from the tree.
- We continue until all the ambiguous rules have been processed; the final output is a tree with no more than beam-size translations.
- After that we take only the best translation and output it.
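A condensed sketch of this beam search (Python pseudocode with simplified data structures; the real implementation is the C++ code in apertium-kaz-tur-mt):
<pre>
def beam_search(beam, ambig_info, classes_weights):
    """Return up to `beam` rule combinations with the highest summed weights.

    ambig_info      -- list of (pattern, combinations) groups, where each
                       combination is a tuple of ambiguous-rule indices
    classes_weights -- {(pattern, combination): yasmet weight}
    """
    beam_tree = [([], 0.0)]                      # (rule indices so far, weight sum)
    for pattern, combinations in ambig_info:
        expanded = []
        for rules, total in beam_tree:           # expand: |beam_tree| * |combinations|
            for combo in combinations:
                weight = classes_weights.get((pattern, combo), 0.0)
                expanded.append((rules + list(combo), total + weight))
        # sort in descending order of weight sum and keep only `beam` entries
        expanded.sort(key=lambda entry: entry[1], reverse=True)
        beam_tree = expanded[:beam]
    return beam_tree

# The best translation corresponds to the rule choices of the top entry:
# best_rules, best_weight = beam_search(8, ambig_info, classes_weights)[0]
</pre>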