Lexical feature transfer - Second report

== Review ==

In the first attempt at solving the problem of corpus-based preposition selection, both a Naive Bayes and an SVM classifier were tried out. The lemmas and some of the tags of the surrounding words were extracted as features for the classifier. The source-language corpus was used to extract training examples matching the <n1> <pr> <n2> -> <n1> <pr> <n2> pattern, and the target-language corpus was used to label the extracted training examples.

Around 12.000 of the extracted examples were aligned to their target-language translations and labeled. There was some improvement in translation quality; however, there were also many wrong predictions, a result of the small training set and of formatting errors in it.
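
For illustration, here is a minimal Python sketch of the labeling step, assuming the aligned target-language phrase is already available; the preposition list, function name and example translation are hypothetical:

<pre>
# Hypothetical sketch of labeling: given a source-language trigram and its
# aligned target-language phrase, the English preposition found in the
# phrase becomes the class label. The preposition list is an assumption.
ENGLISH_PREPOSITIONS = {"of", "from", "in", "for", "on", "at", "to", "with", "by"}

def label_example(source_trigram, aligned_phrase):
    """Return a (feature, label) pair, or None if no preposition is found."""
    for token in aligned_phrase.lower().split():
        if token in ENGLISH_PREPOSITIONS:
            return ("--".join(source_trigram), token)
    return None

print(label_example(("положба", "на", "пазар"), "situation of the market"))
# ('положба--на--пазар', 'of')
</pre>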

== Corpora, sets and alignment ==

The parallel corpus for the Macedonian - English pair, a total of 207.778 parallel sentences, can be downloaded from here:
http://www.nljubesic.net/resources/corpora/setimes/

The first 150.000 parallel sentences were used for extracting and aligning training examples, while the last 50.000 sentences were used for testing the model(s).
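
In code, the split amounts to slicing the sentence-aligned file. A minimal sketch, assuming one sentence pair per line; the file names are hypothetical:

<pre>
# Split the parallel corpus: first 150.000 sentence pairs for training,
# last 50.000 for testing (the 7.778 pairs in between are unused).
with open("setimes.mk-en.txt", encoding="utf-8") as f:
    lines = f.readlines()

with open("train.txt", "w", encoding="utf-8") as f:
    f.writelines(lines[:150000])   # first 150.000 pairs for training

with open("test.txt", "w", encoding="utf-8") as f:
    f.writelines(lines[-50000:])   # last 50.000 pairs for testing
</pre>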

This time, the aligner was extended to match the pattern:

  <n | v> <pr> <adj | det>* <n | v>
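
A rough regular-expression version of this pattern over an Apertium-style tagged stream (tokens of the form ^lemma<tags>$) could look like the sketch below; the exact tag names (e.g. vblex for verbs) are assumptions about the tagset in use:

<pre>
import re

# Sketch of the extended aligner pattern: a noun or verb, a preposition,
# zero or more adjectives or determiners, then another noun or verb.
def tok(tags):
    # One token: ^lemma<tag>...$, capturing the lemma.
    return r"\^([^<$]+)<(?:%s)>[^$]*\$" % tags

pattern = re.compile(
    tok("n|vblex") + r"\s+"               # first noun or verb
    + tok("pr") + r"\s+"                  # preposition
    + r"(?:" + tok("adj|det") + r"\s+)*"  # skipped adjectives / determiners
    + tok("n|vblex")                      # second noun or verb
)

line = "^положба<n><f><sg>$ ^на<pr>$ ^пазар<n><m><sg>$"
m = pattern.search(line)
if m:
    print(m.group(1), m.group(2), m.group(4))  # положба на пазар
</pre>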

== First Model ==

In the first model, the lemmas of the extracted nouns / verbs and the preposition were concatenated into a single feature, and a Naive Bayes classifier was used.

<pre>
feature1                  | label
----------------------------------
положба--на--пазар        | of
извор--од--влада          | from
кандидат--во--процес      | in
процес--на--приватизација | of
власт--за--нерегуларност  | for
</pre>

This made the model quite complex, and every trigram from the test set which was not seen in the training set was discarded, since the model did not know what to do with it. Precision was high, as expected, but only 1.800 lines out of 50.000 in the test set were actually affected.
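
A minimal sketch of this first model, assuming scikit-learn (the actual implementation may have differed): the whole trigram is one categorical feature, so anything outside the fitted vocabulary has to be skipped, which is exactly why unseen trigrams were discarded.

<pre>
# Minimal sketch of the first model, assuming scikit-learn. The whole
# lemma1--pr--lemma2 trigram is a single categorical feature for a
# Naive Bayes classifier.
from sklearn.feature_extraction import DictVectorizer
from sklearn.naive_bayes import MultinomialNB

train = [
    ("положба--на--пазар", "of"),
    ("извор--од--влада", "from"),
    ("кандидат--во--процес", "in"),
    ("власт--за--нерегуларност", "for"),
]

vec = DictVectorizer()
X = vec.fit_transform([{"trigram": f} for f, _ in train])
clf = MultinomialNB().fit(X, [label for _, label in train])

def predict(trigram):
    # Discard trigrams never seen in training: the model has no
    # evidence for them, matching the behaviour described above.
    if "trigram=%s" % trigram not in vec.vocabulary_:
        return None
    return clf.predict(vec.transform([{"trigram": trigram}]))[0]

print(predict("положба--на--пазар"))   # 'of'
print(predict("процес--на--нешто"))    # None (unseen, discarded)
</pre>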