Constraint-based lexical selection for rule-based machine translation

<pre>
Corpus: cawiki-20110616-pages-articles.xml.bz2 cleaned with `aq-wikicrp'

1758582  lines
 531983  unique analyses
 531436  lines with >1 translation (30%)
   2740  analyses with >1 translation
    287  words (lemma+pos) with >1 translation in corpus
    712  words in dictionary with >1 translation
   1.03  fertility of dictionary over corpus (e.g. total number of
         word:word translations / total number of words)
</pre>
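These counts can be reproduced from the output of lexical transfer. The sketch below is a minimal illustration rather than the script actually used: it assumes one sentence per line on standard input, with each lexical unit in the stream format shown further down this page (^analysis/translation1/.../translationN$).

<pre>
#!/usr/bin/env python3
# Rough ambiguity statistics over lexical-transfer output (illustrative sketch).
import re
import sys
from collections import defaultdict

LU = re.compile(r'\^([^/$]+)((?:/[^/$]*)*)\$')   # ^analysis/tr1/tr2...$

translations = defaultdict(set)   # source analysis -> set of translations seen
lines = ambiguous_lines = total_words = total_translations = 0

for line in sys.stdin:
    lines += 1
    line_ambiguous = False
    for m in LU.finditer(line):
        targets = [t for t in m.group(2).split('/') if t]
        translations[m.group(1)].update(targets)
        total_words += 1
        total_translations += max(len(targets), 1)
        if len(targets) > 1:
            line_ambiguous = True
    if line_ambiguous:
        ambiguous_lines += 1

print(lines, 'lines')
print(len(translations), 'unique analyses')
print(ambiguous_lines, 'lines with >1 translation')
print(sum(1 for t in translations.values() if len(t) > 1), 'analyses with >1 translation')
print('%.2f fertility (translations per word)' % (total_translations / max(total_words, 1)))
</pre>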
Test corpus:

* 150 test words
* 1,500 sentences
* 10 per test word
* Randomly selected from the subset of sentences which were found in the corpus (see the sketch after this list).
* Only words with >100 example sentences included
* Rationale: The dictionary does not provide good enough coverage to produce statistically significant results over a whole corpus.
* Discarded because of bad tagging/MWE recognition: 'to', 'sol', 'portada', 'cap', 'cop', 'marxa' (less than 60% correct)
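A minimal sketch of the sampling described above, assuming a hypothetical file examples.tsv with one extracted example sentence per line, prefixed by the test word it contains (the file name and format are illustrative, not from the experiment):

<pre>
import random
from collections import defaultdict

# examples.tsv (hypothetical): "test_word<TAB>sentence" per line.
by_word = defaultdict(list)
with open('examples.tsv', encoding='utf-8') as fh:
    for line in fh:
        word, sentence = line.rstrip('\n').split('\t', 1)
        by_word[word].append(sentence)

random.seed(1)                            # make the random sample reproducible
test_corpus = {}
for word, sentences in by_word.items():
    if len(sentences) > 100:              # only words with >100 example sentences
        test_corpus[word] = random.sample(sentences, 10)    # 10 per test word

print(len(test_corpus), 'test words,',
      sum(len(s) for s in test_corpus.values()), 'sentences')
</pre>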
Training corpus:
*: This would require a parallel corpus.

Baselines:

* TL Frequency-best (see the sketch after this list)
* TLM-best
* Linguist set
* Full analysis:full analysis dictionary from GIZA++
* Rules from phrase table
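As an illustration of the simplest of these, the sketch below picks, for each ambiguous word, the translation whose lemma is most frequent in a target-language corpus (the 'TL Frequency-best' idea). The corpus file name, the whitespace tokenisation and the tag stripping are assumptions made for the example:

<pre>
import re
from collections import Counter

# Hypothetical plain-text target-language corpus, one sentence per line.
tl_freq = Counter()
with open('tl_corpus.txt', encoding='utf-8') as fh:
    for line in fh:
        tl_freq.update(line.lower().split())

def lemma(option):
    return re.sub(r'<[^>]*>', '', option).lower()    # strip morphological tags

def tl_frequency_best(options):
    """Pick the translation whose lemma is most frequent in the TL corpus;
    ties fall back to the first option listed in the dictionary."""
    return max(options, key=lambda o: (tl_freq[lemma(o)], -options.index(o)))

# For 'patró' the bilingual dictionary offers (from the example further down):
options = ['patron<n><sg>', 'owner<n><sg>', 'master<n><sg>',
           'head<n><sg>', 'pattern<n><sg>', 'employer<n><sg>']
print(tl_frequency_best(options))
</pre>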
Process for using GIZA++:

* Tag both sides of the corpus (Europarl, en-ca, first 1,700,000 sentences) with the Apertium language pair.
* Extract the model/lex.f2e file.
* Take the top-scoring analysis:analysis pairs where the POS matches (see the sketch after this list).
* Where a word is already ambiguous in the Apertium dictionaries, add the possibilities from GIZA++ to the dictionary so that they may be chosen -- they are added with only the POS tag.
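A rough sketch of the extraction step. The column order in lex.f2e is an assumption for the example (here: target token, source token, probability) -- swap the first two fields if the table you extracted runs the other way. Since both sides of the corpus were tagged, the tokens are tagged analyses and the first morphological tag is taken as the POS:

<pre>
import re
from collections import defaultdict

TAG = re.compile(r'<([^>]+)>')

def pos(token):
    """First morphological tag of a tagged token, e.g. 'n' for patró<n><m><sg>."""
    m = TAG.search(token)
    return m.group(1) if m else None

# best[source analysis] = (probability, target analysis) for the top-scoring
# pair whose POS matches on both sides.
best = defaultdict(lambda: (0.0, None))

with open('model/lex.f2e', encoding='utf-8') as fh:
    for line in fh:
        fields = line.split()
        if len(fields) != 3:
            continue
        tl, sl, p = fields[0], fields[1], float(fields[2])
        if pos(sl) is not None and pos(sl) == pos(tl) and p > best[sl][0]:
            best[sl] = (p, tl)

for sl, (p, tl) in sorted(best.items()):
    print('%s\t%s\t%.4f' % (sl, tl, p))
</pre>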
==Annotation process==

# Translate the corpus (native speaker of English, competent in Catalan), adding missing translations to the bilingual dictionary options.
#* Words with too many tagging or MWE errors are left out.
# Proofread the corpus.
# Run the corpus up to the lexical transfer stage.
# Annotate the output of lexical transfer (see the sketch after this list).
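A minimal sketch of the last step: walk through the lexical-transfer output, show the annotator each ambiguous lexical unit with its numbered translation options, and keep only the chosen translation in the output (as in the annotated reference under Testing below). The file names are illustrative; this is not the tool that was actually used.

<pre>
import re

LU = re.compile(r'\^([^/$]+)((?:/[^/$]*)*)\$')    # ^analysis/tr1/tr2...$

def annotate(line):
    """Replace each ambiguous lexical unit by the translation the annotator picks."""
    def choose(m):
        source = m.group(1)
        options = [t for t in m.group(2).split('/') if t]
        if len(options) <= 1:
            return m.group(0)                     # unambiguous: keep as is
        for i, option in enumerate(options, 1):
            print('  %d) %s' % (i, option))
        picked = int(input('choice for %s: ' % source)) - 1
        return '^%s/%s$' % (source, options[picked])
    return LU.sub(choose, line)

# corpus.biltrans (hypothetical): the output of lexical transfer, one sentence per line.
with open('corpus.biltrans', encoding='utf-8') as fh, \
     open('corpus.annotated', 'w', encoding='utf-8') as out:
    for line in fh:
        out.write(annotate(line.rstrip('\n')) + '\n')
</pre>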
==Testing==

;Input: Les Carmelites el veneren com a sant patró seu.

<pre>
^El<det><def><f><pl>/The<det><def><f><pl>$
^*Carmelites/*Carmelites$
^prpers<prn><pro><p3><m><sg>/prpers<prn><obj><p3><nt><sg>$
^venerar<vblex><pri><p3><pl>/venerate<vblex><pri><p3><pl>$
^com a<pr>/as a<pr>$ ^sant<adj><m><sg>/saint<adj><m><sg>$
^patró<n><m><sg>/patron<n><sg>/owner<n><sg>/master<n><sg>/head<n><sg>/pattern<n><sg>/employer<n><sg>$
^seu<adj><pos><m><sg>/his<adj><pos><m><sg>$^.<sent>/.<sent>$^.<sent>/.<sent>$
</pre>

;Reference:

<pre>
235626 ]^El<det><def><f><pl>/The<det><def><f><pl>$
^*Carmelites/*Carmelites$
^prpers<prn><pro><p3><m><sg>/prpers<prn><obj><p3><nt><sg>$
^venerar<vblex><pri><p3><pl>/venerate<vblex><pri><p3><pl>$
^com a<pr>/as a<pr>$
^sant<adj><m><sg>/saint<adj><m><sg>$
^patró<n><m><sg>/patron<n><sg>$
^seu<adj><pos><m><sg>/his<adj><pos><m><sg>$^.<sent>/.<sent>$[
</pre>

;Test 1 (1/6)
<pre>
^patró<n><m><sg>/patron<n><sg>/owner<n><sg>/master<n><sg>/head<n><sg>/pattern<n><sg>/employer<n><sg>$
</pre>

;Test 2 (1/1)
<pre>
^patró<n><m><sg>/patron<n><sg>$
</pre>

;Test 3 (1/4)
<pre>
^patró<n><m><sg>/patron<n><sg>/owner<n><sg>/master<n><sg>/employer<n><sg>$
</pre>
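The three test blocks above differ only in how many translations are left available for 'patró' (six, one and four), which sets the chance level for each condition. Below is a small sketch of how a selection can be scored against the annotated reference, assuming both files are in the stream format above; it checks, unit by unit, whether the translation the system kept is the one the annotator kept. The file names are placeholders:

<pre>
import re

LU = re.compile(r'\^([^/$]+)((?:/[^/$]*)*)\$')    # ^analysis/tr1/tr2...$

def selections(path):
    """Yield (source analysis, first remaining translation) for each unit."""
    with open(path, encoding='utf-8') as fh:
        for line in fh:
            for m in LU.finditer(line):
                targets = [t for t in m.group(2).split('/') if t]
                if targets:
                    yield m.group(1), targets[0]

hits = total = 0
for (_, sys_tr), (_, ref_tr) in zip(selections('output.txt'),
                                    selections('reference.txt')):
    total += 1
    hits += (sys_tr == ref_tr)

print('%d/%d correct (%.1f%%)' % (hits, total, 100.0 * hits / max(total, 1)))
</pre>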