Ideas for Google Summer of Code/Add weights to lttoolbox

From Apertium


lttoolbox is a set of tools for building finite-state transducers. It would be good to add the possibility of weighting lexemes and analyses.

Weights on this page are defined intuitively: larger is worse, and two weights can reasonably be combined by addition to make a larger weight. Theoretically they can be log-probabilities, tropical semiring weights, or similar.
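The intuition above matches the tropical semiring: weights along one path combine by real addition, and choosing among alternative analyses takes the minimum. A minimal sketch (the function names here are invented for illustration):

```python
import math

def prob_to_weight(p):
    """Convert a probability to a tropical weight: smaller is better."""
    return -math.log(p)

def combine_path(*weights):
    """Weights along a single path combine by addition."""
    return sum(weights)

def best_of(*weights):
    """Among alternative analyses, the minimum weight wins."""
    return min(weights)

# A frequent analysis (p=0.9) beats a rare one (p=0.1):
w_common = prob_to_weight(0.9)
w_rare = prob_to_weight(0.1)
assert best_of(w_common, w_rare) == w_common
```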

Apertium pipeline vs. weights

The Apertium pipeline looks like this:

  1. morphological analysis
    1. POS tagging
    2. disambiguation
  2. lexical transfer
    1. lexical selection
  3. structural transfer
  4. morphological generation

For most of the tasks the weights are well understood and clear cut:

  1. WFST morphological analysers (surface unigrams, compound n-grams, morph n-grams, lemma weights, pos tag sequence weights)
    1. N-gram FSTs (OpenGRM etc.)
    2. CG with weights!
  2. IBM model 1, apertium's current lexical selection module, etc.
  3. Something from SMT, simple weights for rules?
    1. there's whole separate gsoc project for this
  4. The WFST morphological generator is the analyser inverted

Weights can and should be scaled and combined. The formulas for combining them can be learned automatically from gold corpora.
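One way to scale and combine component weights (the page does not fix a method, so this is a hypothetical sketch with invented data): combine per-component weights linearly and tune the scaling factors by grid search against a gold corpus.

```python
import itertools

def combined(analysis, lambdas):
    """Linear combination of one weight per component (e.g. lemma, tag sequence)."""
    return sum(l * w for l, w in zip(lambdas, analysis["weights"]))

def accuracy(corpus, lambdas):
    """Fraction of tokens where the lowest combined weight is the gold analysis."""
    hits = 0
    for token in corpus:
        best = min(token["analyses"], key=lambda a: combined(a, lambdas))
        hits += best["tags"] == token["gold"]
    return hits / len(corpus)

def tune(corpus, grid=(0.1, 0.5, 1.0, 2.0)):
    """Grid-search the scaling factors that maximise gold accuracy."""
    n = len(corpus[0]["analyses"][0]["weights"])
    return max(itertools.product(grid, repeat=n),
               key=lambda ls: accuracy(corpus, ls))

# Toy gold corpus: one token with two candidate analyses.
corpus = [{"gold": "<n>",
           "analyses": [{"tags": "<n>",   "weights": (1.0, 3.0)},
                        {"tags": "<adj>", "weights": (2.0, 1.0)}]}]
best_lambdas = tune(corpus)
assert accuracy(corpus, best_lambdas) == 1.0
```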

Syntaxes for weights in dixes and t?xes and all

The task should include designing formats for weights in dictionaries...

Writing weights for lemmas:

  <section id="main" type="standard">
    <e lm="foo" w="1"><i>foo</i><par n="foo__n"/></e>
    <e lm="foo" w="2"><i>foo</i><par n="bar__adj"/></e>
    <e lm="foo" w="7"><i>foo</i><par n="baz__adv"/></e>
  </section>

Same stuff for other parts...?

  <pardef n="foo__n">
    <e w="1"><p><l/><r><s n="sg"/></r></p></e>
    <e w="4"><p><l>s</l><r><s n="pl"/></r></p></e>
  </pardef>

Pipeline outputs with weights

Morphological analysis with weights should output an ordered list of analyses (sorted by weight):

^foo/foo<n>/foo<adj>/foo<adv>$
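The on-wire syntax for weight annotations is not specified on this page, so as a sketch, here is how a stream-format lexical unit could be parsed and its analyses ordered, with the weights supplied as a separate (invented) mapping:

```python
import re

def ordered_analyses(lexical_unit, weight):
    """Split ^surface/analysis1/.../analysisN$ and sort analyses by weight."""
    m = re.fullmatch(r"\^(.*?)\$", lexical_unit)
    surface, *analyses = m.group(1).split("/")
    return surface, sorted(analyses, key=weight)

# Invented weights; smaller is better, as defined above.
weights = {"foo<n>": 1.0, "foo<adj>": 2.0, "foo<adv>": 7.0}
surface, analyses = ordered_analyses("^foo/foo<adv>/foo<n>/foo<adj>$",
                                     lambda a: weights[a])
assert surface == "foo"
assert analyses == ["foo<n>", "foo<adj>", "foo<adv>"]
```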

Acquiring weights

There are a couple of easy ways to obtain weights:

  • gold-standard tagged corpora and
  • untagged corpora

can be counted with sort | uniq -c and composed onto the analysis and surface sides of the monodix automaton, respectively. And:

  • rules can assign weights to things: <sg><ins> += 123, <sg><gen> += 0.1, etc.
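The counting idea can be sketched as follows: frequencies from a corpus (what sort | uniq -c produces) become weights via the negative log of the relative frequency, so frequent items get small (good) weights. The corpus text here is invented:

```python
import math
from collections import Counter

tokens = "the cat sat on the mat the cat".split()
counts = Counter(tokens)              # equivalent of `sort | uniq -c`
total = sum(counts.values())

# -log(relative frequency): frequent forms get small (good) weights.
weights = {form: -math.log(n / total) for form, n in counts.items()}

# "the" (3 occurrences) is cheaper than "sat" (1 occurrence):
assert weights["the"] < weights["sat"]
```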

Notes with weights

  • HFST already supports weighted automata out of the box (OpenFst, pyfst and libhfst-python are all good candidates for prototyping)
  • Weights should ideally be supported throughout the pipeline: analysis, disambiguation, lexical transfer/selection, translation and chunking can all use them reasonably
  • For lexical transfer, the weights could basically be IBM model 1 on top of lttoolbox.
  • compare how SMT systems handle probabilities

Related features

A .dix in lttoolbox can have several "sections", each of these is a separate (non-merged) fst in the compiled binary.

We would also like to have weights on whole sections – this is easier to implement than arc weights. If you have three sections, where a and b have weight 1 and c has weight 3, you would first try to analyse the word with a ∪ b; if it's known there, you output that analysis (ignoring c), but if it's still unknown, you analyse with c. This way, we can have e.g. heuristics in section c that don't pollute the output for words that already have "good" known analyses.
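The fallback behaviour described above can be sketched as follows, with plain dictionaries standing in for the compiled section FSTs (all names and data are invented for illustration):

```python
def analyse(word, sections):
    """sections: list of (weight, lookup) pairs; lookup maps word -> analyses.
    Try sections in ascending weight order; heavier sections are consulted
    only if every lighter section fails."""
    by_weight = {}
    for weight, lookup in sections:
        by_weight.setdefault(weight, []).append(lookup)
    for weight in sorted(by_weight):
        # Union of all sections at this weight (a ∪ b in the example above)
        analyses = [a for lookup in by_weight[weight]
                    for a in lookup.get(word, [])]
        if analyses:
            return analyses          # known here: ignore heavier sections
    return ["*" + word]              # unknown everywhere

a = (1, {"cat": ["cat<n><sg>"]})
b = (1, {"dog": ["dog<n><sg>"]})
c = (3, {"cat": ["cat<guess>"], "xyzzy": ["xyzzy<guess>"]})

assert analyse("cat", [a, b, c]) == ["cat<n><sg>"]      # c is ignored
assert analyse("xyzzy", [a, b, c]) == ["xyzzy<guess>"]  # falls back to c
```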

Tasks

  • Syntaxes for weights
  • Specs for weight annotations in the pipeline
  • Tools for acquiring / converting weights
  • Tuning weight combinations
  • Implement weighted arcs in lttoolbox (C++) and integrate into Apertium
  • Implement weighted sections in lttoolbox
  • Implement recursive paradigms in lttoolbox

Coding challenge

Further reading