Multi-engine translation synthesiser
Revision as of 12:34, 29 March 2009
The idea of this project is to take advantage of all possible resources in creating MT systems for marginalised languages. The general idea is to use the output of various MT systems to produce one "better" translation. The "baseline" would be to use Apertium and Moses.
Ideas
- Statistical post-edition
This idea is similar to the TMX support in Apertium, except that it runs at the end of the pipeline.
- The first approximation would be to:
  - Take a parallel corpus, e.g. Welsh--English, then run the Welsh side through Apertium to get an English(MT)--English phrase table.
  - Make a program that goes at the end of the pipeline and looks up n-gram phrases from the MT output in the phrase table.
  - If it finds a matching phrase, score both the original and the replacement on a language model and choose the one with the higher probability.
  - This idea can be extended by incorporating user feedback: for example, when a user post-edits a phrase, the corrected phrase can be added to the phrase table at a given probability.
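The lookup-and-rescore step above can be sketched roughly as follows. This is a minimal illustration, not a description of any existing Apertium component: the phrase table and the bigram language model are tiny hand-written toys (a real system would extract the table from an English(MT)--English corpus and train the LM with a standard toolkit), and all names are illustrative.

```python
import math

# Toy phrase table: MT-output phrase -> candidate post-edited phrase.
# In practice this would be extracted from an English(MT)--English
# parallel corpus; these entries are invented for illustration.
phrase_table = {
    "house big": "big house",
    "car red": "red car",
}

# Toy bigram "language model": log-probabilities for a few word pairs.
# A real system would use a proper n-gram LM.
bigram_logprob = {
    ("big", "house"): math.log(0.02),
    ("red", "car"): math.log(0.03),
}

def lm_score(phrase):
    """Sum of bigram log-probabilities; unseen pairs get a small floor."""
    words = phrase.split()
    return sum(bigram_logprob.get(pair, math.log(1e-4))
               for pair in zip(words, words[1:]))

def post_edit(sentence, n=2):
    """Scan n-grams of the MT output left to right; when an n-gram is in
    the phrase table, keep whichever of (original, replacement) the
    language model scores higher."""
    words = sentence.split()
    out = []
    i = 0
    while i < len(words):
        ngram = " ".join(words[i:i + n])
        candidate = phrase_table.get(ngram)
        if candidate is not None and lm_score(candidate) > lm_score(ngram):
            out.append(candidate)   # replacement wins on the LM
            i += n
        else:
            out.append(words[i])    # keep the MT output word
            i += 1
    return " ".join(out)
```

For example, `post_edit("the house big")` finds "house big" in the phrase table, and the LM prefers "big house", so the output is "the big house"; sentences with no matching n-grams pass through unchanged.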
Issues: speed (scoring against a language model is slow).