Shallow syntactic function labeller

From Apertium
Revision as of 11:15, 27 August 2017 by Deltamachine (talk | contribs)

This is a Google Summer of Code 2017 project.

The repository for the whole project: https://github.com/deltamachine/shallow_syntactic_function_labeller

A workplan and progress notes can be found here: Shallow syntactic function labeller/Workplan

What was done

1. All the needed data for North Sami, Kurmanji, Breton, Kazakh and English was prepared. There are two scripts: one creates datasets from UD treebanks (it handles Kurmanji, Breton, Kazakh and English), and the other creates datasets from VISL treebanks (it handles North Sami).
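As an illustration, the core step of the UD-treebank script (turning one CoNLL-U sentence into a tag sequence paired with a label sequence) might look roughly like this. This is a minimal sketch, not the project's actual code; the function name is hypothetical, and only the standard CoNLL-U column layout is assumed.

```python
# Sketch: convert one CoNLL-U sentence into (morph-tag sequence, deprel label sequence).
# Columns follow the CoNLL-U spec: ID, FORM, LEMMA, UPOS, XPOS, FEATS, HEAD, DEPREL, ...

def conllu_sentence_to_pair(sentence_lines):
    tags, labels = [], []
    for line in sentence_lines:
        if not line or line.startswith('#'):
            continue                                # skip comments and blanks
        cols = line.split('\t')
        if '-' in cols[0] or '.' in cols[0]:        # skip multiword and empty tokens
            continue
        upos, feats, deprel = cols[3], cols[5], cols[7]
        morph = [upos] + ([] if feats == '_' else feats.split('|'))
        tags.append(morph)
        labels.append('@' + deprel)                 # Apertium-style label, e.g. @nsubj
    return tags, labels

example = [
    "1\tdogs\tdog\tNOUN\tNNS\tNumber=Plur\t2\tnsubj\t_\t_",
    "2\tbark\tbark\tVERB\tVBP\tTense=Pres\t0\troot\t_\t_",
]
print(conllu_sentence_to_pair(example))
# → ([['NOUN', 'Number=Plur'], ['VERB', 'Tense=Pres']], ['@nsubj', '@root'])
```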

2. A simple RNN that labels sentences was built. It works with fastText embeddings for every tag seen in the corpus: the embedding of a word is simply the sum of the embeddings of all its tags.
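The tag-summing idea can be sketched as follows. The tag-to-vector table below is a stand-in (in the project the vectors come from fastText); everything here is illustrative, not the actual model code.

```python
import numpy as np

DIM = 4
tag_vectors = {                         # hypothetical pretrained tag embeddings
    'n':  np.array([0.1, 0.2, 0.0, 0.3]),
    'f':  np.array([0.0, 0.1, 0.1, 0.0]),
    'sg': np.array([0.2, 0.0, 0.1, 0.1]),
}

def word_embedding(tags):
    """A word's embedding = sum of the embeddings of its morphological tags."""
    vec = np.zeros(DIM)
    for t in tags:
        vec += tag_vectors.get(t, np.zeros(DIM))   # unseen tags contribute nothing
    return vec

print(word_embedding(['n', 'f', 'sg']))  # → [0.3 0.3 0.2 0.4]
```

The resulting per-word vectors are what the RNN consumes, one vector per token in the sentence.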

3. A testpack for two language pairs was built: it contains all the needed data for sme-nob and kmr-eng, the labeller itself and an installation script.

List of commits

All commits are listed below:

https://github.com/deltamachine/shallow_syntactic_function_labeller/commits/master

Description

The shallow syntactic function labeller takes a string in Apertium stream format, parses it into a sequence of morphological tags and passes it to a classifier. The classifier is a simple RNN model trained on datasets prepared from syntax-labelled corpora (mostly UD treebanks). It analyzes the given sequence of morphological tags and outputs a sequence of labels, which the labeller then applies to the original string.
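The parse-then-relabel round trip can be sketched with two small helpers. This is a simplified illustration, not the project's parser: the regexes cover only the common `^lemma<tags>$` case, and both function names are made up here.

```python
import re

LU = re.compile(r'\^([^$]+)\$')   # one Apertium lexical unit: ^...$

def extract_tag_seqs(stream):
    """One tag list per lexical unit: '^peyam<n><f><sg>$' -> ['n', 'f', 'sg']."""
    return [re.findall(r'<([^<>]+)>', lu) for lu in LU.findall(stream)]

def apply_labels(stream, labels):
    """Append one predicted label (e.g. '@nmod') to each lexical unit, in order."""
    it = iter(labels)
    return LU.sub(lambda m: '^' + m.group(1) + '<' + next(it) + '>$', stream)

s = '^peyam<n><f><sg>$ ^xwe<prn><ref>$'
print(extract_tag_seqs(s))                       # → [['n', 'f', 'sg'], ['prn', 'ref']]
print(apply_labels(s, ['@nmod', '@nmod:poss']))
# → ^peyam<n><f><sg><@nmod>$ ^xwe<prn><ref><@nmod:poss>$
```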

Labeller in the pipeline

The labeller runs between the morphological analyzer or disambiguator and pretransfer.

For example, in sme-nob it runs between sme-nob-disam and sme-nob-pretransfer, just like the original syntax module:

... | cg-proc 'sme-nob.mor.rlx.bin' | python 'sme-nob-labeller.py' | apertium-pretransfer | lt-proc -b 'sme-nob.autobil.bin' | ...

Language pairs support

Currently the labeller works with the following language pairs:

  • sme-nob: the labeller can fully replace the original syntax module (it does not have all the functionality of the original CG, but works fairly well anyway)
  • kmr-eng: can be tested in the pipeline, but the pair has only a few rules that look at syntax labels

All the needed data for Breton, Kazakh and English is also available (https://github.com/deltamachine/shallow_syntactic_function_labeller/tree/master/models), but at the moment br-fr, kk-tat and en-ca simply have no syntax rules, so the labeller cannot be tested on those pairs.

Labelling performance

The results of validating the labeller on the test sets (accuracy = mean accuracy score on the test set):

Language      Accuracy
North Sami    81.6%
Kurmanji      84.0%
Breton        79.7%
Kazakh        82.6%
English       79.8%

Installation

Prerequisites

1. Python libraries:

2. Precompiled language pairs which support the labeller (sme-nob, kmr-eng)

How to install a testpack

NB: currently the testpack contains syntax modules only for sme-nob and kmr-eng.

git clone https://github.com/deltamachine/sfl_testpack.git
cd sfl_testpack

The setup.py script adds all the needed files to the language pair directory and updates all the relevant modes files.

Arguments:

  • work_mode: -lb to install the labeller and change the modes, -cg to revert the changes and use the original syntax module (sme-nob.syn.rlx.bin or kmr-eng.prob) in the pipeline.
  • lang: -sme to install/uninstall the labeller only for sme-nob, -kmr only for kmr-eng, -all for both.

For example, this command will install the labeller and add it to the pipeline for both pairs:

python setup.py -lb -all

And this command will revert the modes changes for sme-nob:

python setup.py -cg -sme
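A minimal sketch of how setup.py's flag handling might look (hypothetical; the real script additionally copies files and rewrites the modes files):

```python
import sys

def parse_args(argv):
    """Map the two positional flags onto a work mode and a list of pairs."""
    work_mode = {'-lb': 'labeller', '-cg': 'constraint_grammar'}
    lang = {'-sme': ['sme-nob'], '-kmr': ['kmr-eng'], '-all': ['sme-nob', 'kmr-eng']}
    if len(argv) != 2 or argv[0] not in work_mode or argv[1] not in lang:
        sys.exit('usage: python setup.py (-lb|-cg) (-sme|-kmr|-all)')
    return work_mode[argv[0]], lang[argv[1]]

print(parse_args(['-lb', '-all']))  # → ('labeller', ['sme-nob', 'kmr-eng'])
```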

Bugs

  • The installation script changes the eng-kmr pipeline along with kmr-eng.
  • Some words do not get a label, as in the IRC transcript below:
<spectre> is it possible that some words don't get a label ?
<spectre> $ echo "Barzanî di peyama xwe de behsa mijarên girîng û kirîtîk kir." | apertium -d . kmr-eng-tagger
<spectre> ^Barzanî<np><ant><m><sg><obl><@dobj>$ ^di<pr><@case>$ ^peyam<n><f><sg><con><def><@nmod>$ ^xwe<prn><ref><mf><sp><@nmod:poss>$ 
^de<post><@case>$ ^behs<n><f><sg><con><def>$ ^mijar<n><f><pl><con><def><@nmod:poss>$ ^girîng<adj><@amod>$ ^û<cnjcoo><@cc>$ ^*kirîtîk$
^kirin<vblex><tv><past><p3><sg>$^..<sent><@punct>$
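One plausible cause is that unanalysed words (`^*word$`) carry no tags, so the classifier produces no prediction for them; if so, skipping them when splicing labels back in would keep predictions aligned with the analysable units. This is an assumption about the bug, and the sketch below (including the `@x` fallback label) is hypothetical:

```python
import re

LU = re.compile(r'\^([^$]+)\$')

def apply_labels_safely(stream, labels):
    """Splice labels into analysed units only; leave unanalysed '^*word$' untouched."""
    it = iter(labels)
    def sub(m):
        unit = m.group(1)
        if unit.startswith('*'):                         # unanalysed word
            return m.group(0)
        return '^' + unit + '<' + next(it, '@x') + '>$'  # '@x': hypothetical fallback
    return LU.sub(sub, stream)

s = '^girîng<adj>$ ^*kirîtîk$ ^kirin<vblex><tv>$'
print(apply_labels_safely(s, ['@amod', '@root']))
# → ^girîng<adj><@amod>$ ^*kirîtîk$ ^kirin<vblex><tv><@root>$
```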

To do

  • Run more tests. MORE.
  • Refactor the main code.
  • Continue improving the performance of the models.