Difference between revisions of "Ideas for Google Summer of Code/User-friendly lexical selection training"

From Apertium

Revision as of 12:38, 21 February 2018

Our bilingual dictionaries allow for ambiguous translations; selecting the right one in context is handled by our Lexical selection module, apertium-lex-tools. We can either write rules manually, or word-align a corpus and train on it to infer rules automatically. Unfortunately, the training procedure is a bit messy: it involves various scripts that require lots of manual tweaking, and many third-party tools that must be installed, e.g. IRSTLM, Moses and GIZA++.
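To make the end product concrete: the training process ultimately emits lexical selection rules in the .lrx XML format used by apertium-lex-tools. A minimal hand-written rule has roughly this shape (the lemmas and tags here are invented for illustration, not taken from a real language pair):

```xml
<rules>
  <rule>
    <!-- In the context of "river", translate "bank" as "orilla" -->
    <match lemma="river"/>
    <match lemma="bank" tags="n.*">
      <select lemma="orilla" tags="n.*"/>
    </match>
  </rule>
</rules>
```

Training on a corpus would produce a (much larger) file of rules like this automatically, each with a weight learned from the data.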

The goal of this task is to make the training procedure as streamlined and user-friendly as possible. Ideally, we should only have to write a simple config file, and a driver script would take care of the rest. There should also be regression tests on the driver script, to ensure it works in the face of updates to third-party tools.

For some documentation, see the Lexical selection page and onwards.

To get a feel for how lexical selection is used in translation, read How to get started with lexical selection rules, although that page is aimed more at language-pair developers writing rules manually.

Tasks

  • Create a simple config format (e.g. toml-based) that includes all necessary information for the training process
  • Create a driver script that will:
    • validate configuration
    • ensure third party tools are downloaded, configured and built
    • preprocess corpora
    • run training
    • finally produce an .lrx file
    • and preferably allow for evaluation of the .lrx file on a held-out test corpus
    • do this for both parallel corpora (with giza) and non-parallel corpora (just irstlm)
  • Create regression tests for driver script
  • Dog-food the work:
    • run the training on language pairs that don't have (many) lexical selection rules
    • check if it improves quality (using parallel corpora)
    • if it does, add rules to the pair (in cooperation with pair maintainers)
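The driver-script steps above could be sketched as a simple pipeline, where each stage is a function run in order and any failure aborts the run. This is only a skeleton under assumed names (no such script exists yet; the real stages would shell out to the third-party tools):

```python
def validate(cfg):
    """Check the config for required sections before doing any work."""
    for key in ("corpus", "tools", "output"):
        if key not in cfg:
            raise ValueError(f"config missing section: {key}")

def preprocess(cfg):
    """Tokenise and clean the corpora (stub)."""

def train(cfg):
    """Word-align and train (parallel corpus), or build just a
    language model (non-parallel corpus) -- stub."""

def write_lrx(cfg):
    """Emit the final .lrx rule file (stub)."""

def main(cfg):
    # Run each stage in order; an exception in any stage stops the run.
    for step in (validate, preprocess, train, write_lrx):
        print(f"running {step.__name__}")
        step(cfg)

main({"corpus": "en-es", "tools": {}, "output": "en-es.lrx"})
```

Keeping each stage behind a plain function boundary like this also makes the regression-testing task easier: each stage can be tested in isolation with a fixed small corpus.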

Coding challenges

  • Make a simple program to read a config file, check that it's valid and output some values from the config
  • Word-align a bilingual corpus with moses+giza
  • Run lexical selection training for a language pair

Frequently asked questions

  • none yet, ask us something! :)

See also