Ideas for Google Summer of Code/Appraise gisting
Revision as of 19:25, 4 February 2019
== Set up gap-filling machine-translation-for-gisting evaluation on a recent version of Appraise ==
Many language pairs in Apertium are unique, such as Breton–French, and many of them are used for gisting (understanding) rather than dissemination. Gap-filling provides a simple way to evaluate the usefulness of machine translation for gisting: evaluators fill gaps punched in a target-language text, using the machine translation of the source as a hint, and their success rate indicates how well the translation conveys the source. The implementation used in recent experiments is based on an outdated version of the Appraise platform. The code in http://github.com/mlforcada/Appraise, itself forked from the work of a GSoC student, contains an adaptation of an old (2014) version of http://github.com/cfedermann/Appraise that implements gap-filling evaluation as described in a WMT2018 paper. The objective is to make the gap-filling functionality compatible with the latest versions of Appraise, and to automate and improve the scripts used to generate evaluation tasks.
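To illustrate the idea, here is a minimal sketch of how a gap-filling item might be generated from a reference sentence. This is not one of the actual task-generation scripts: the function name is hypothetical, and blanking every n-th token is a simplifying assumption (the experiments select content words instead).

```python
import re

def make_gap_filling_item(reference, every_nth=4, blank="____"):
    """Create a gap-filling item by blanking out every n-th token of a
    reference translation. NOTE: illustrative stand-in for the
    content-word selection used in the real experiments."""
    tokens = reference.split()
    gapped, answers = [], []
    for i, tok in enumerate(tokens, start=1):
        if i % every_nth == 0:
            # Strip trailing punctuation so the answer key holds bare words.
            answers.append(re.sub(r"\W+$", "", tok))
            gapped.append(blank)
        else:
            gapped.append(tok)
    return " ".join(gapped), answers

item, key = make_gap_filling_item(
    "The fishermen have left the port early this morning")
print(item)  # The fishermen have ____ the port early ____ morning
print(key)   # ['left', 'this']
```

An evaluator would then be shown the gapped sentence alongside the machine translation of the source, and asked to restore the missing words.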
* Get hold of a GNU/Linux machine.
* Contact [[User:mlforcada|Mikel L. Forcada]] to obtain the data cited in the paper.
* Make sure you can get an evaluation running.
* Submit the results of an evaluation run to [[User:mlforcada|Mikel L. Forcada]].
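Once an evaluation has run, its usefulness-for-gisting signal boils down to how many gaps were filled correctly. A possible scoring sketch (the function name and the exact-match criterion are assumptions; the experiments may also credit synonyms or lemma matches):

```python
def gap_filling_success(answers, key):
    """Fraction of gaps an evaluator filled with the expected word,
    using case-insensitive exact match as a simplifying assumption."""
    if not key:
        return 0.0
    hits = sum(a.strip().lower() == k.strip().lower()
               for a, k in zip(answers, key))
    return hits / len(key)

print(gap_filling_success(["Left", "this"], ["left", "this"]))  # 1.0
print(gap_filling_success(["left", "that"], ["left", "this"]))  # 0.5
```

Comparing such success rates with and without the machine translation shown is what makes the method an evaluation of gisting usefulness.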
To complete the coding challenge, stay in close contact with your mentor.