Shallow syntactic function labeller

From Apertium

This is a Google Summer of Code 2017 project.

The repository for the whole project:

A workplan and progress notes can be found here: Shallow syntactic function labeller/Workplan

What was done[edit]

1. All the data needed for North Sami, Kurmanji, Breton, Kazakh and English was prepared: there are two scripts, one of which creates datasets from UD treebanks (it handles Kurmanji, Breton, Kazakh and English), while the second creates datasets from VISL treebanks (it handles North Sami).

2. A simple RNN that labels sentences was built. It works with fastText embeddings for every tag seen in the corpus: the embedding of a word is simply the sum of the embeddings of all its tags.

3. The labeller itself was created. A testpack for two language pairs was also built: it contains all the needed data for sme-nob and kmr-eng, the labeller and an installation script.
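The tag-embedding scheme from point 2 can be sketched as follows (a toy illustration: the 4-dimensional vectors and the function name are made up, standing in for the real fastText embeddings learned from the corpus):

```python
# Toy stand-ins for the fastText tag embeddings (hand-picked
# 4-dimensional values; the real vectors are learned from the corpus).
tag_vectors = {
    "n":  [0.1, 0.0, 0.3, -0.2],
    "f":  [0.0, 0.2, -0.1, 0.4],
    "sg": [0.5, -0.3, 0.0, 0.1],
}

def word_embedding(tags):
    """Embed a word as the element-wise sum of its tags' embeddings."""
    dim = len(next(iter(tag_vectors.values())))
    vec = [0.0] * dim
    for t in tags:
        vec = [a + b for a, b in zip(vec, tag_vectors[t])]
    return vec

# A word analysed as <n><f><sg> is embedded as v(n) + v(f) + v(sg):
print([round(x, 2) for x in word_embedding(["n", "f", "sg"])])
# -> [0.6, -0.1, 0.2, 0.3]
```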

List of commits[edit]

All commits are listed below:


The shallow syntactic function labeller takes a string in Apertium stream format, parses it into a sequence of morphological tags and gives it to a classifier. The classifier is a simple RNN model trained on prepared datasets which were made from parsed syntax-labelled corpora (mostly UD-treebanks). The classifier analyzes the given sequence of morphological tags, gives a sequence of labels as an output and the labeller applies these labels to the original string.
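The parse-label-apply cycle described above can be sketched like this (an illustrative re-implementation of the stream handling only, not the project's actual code; the function names are made up and the RNN classifier itself is omitted):

```python
import re

# A lexical unit in Apertium stream format looks like ^lemma<tag1><tag2>$
LU_RE = re.compile(r'\^([^$]*)\$')
TAG_RE = re.compile(r'<([^>]+)>')

def tag_sequences(stream):
    """Extract the morphological tag sequence of every lexical unit."""
    return [TAG_RE.findall(lu) for lu in LU_RE.findall(stream)]

def apply_labels(stream, labels):
    """Append one predicted <@label> to each lexical unit, in order."""
    it = iter(labels)
    return LU_RE.sub(lambda m: '^%s<%s>$' % (m.group(1), next(it)), stream)

line = '^peyam<n><f><sg><con><def>$ ^girîng<adj>$'
print(tag_sequences(line))
# -> [['n', 'f', 'sg', 'con', 'def'], ['adj']]
print(apply_labels(line, ['@nmod', '@amod']))
# -> ^peyam<n><f><sg><con><def><@nmod>$ ^girîng<adj><@amod>$
```

The tag sequences are what the classifier sees; its predicted label sequence is then merged back into the original stream.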

Labeller in the pipeline[edit]

The labeller runs between the morphological analyser (or the disambiguator) and pretransfer.

For example, in sme-nob it runs between sme-nob-disam and sme-nob-pretransfer, just like the original syntax module does.

... | cg-proc 'sme-nob.mor.rlx.bin' | python '' | apertium-pretransfer | lt-proc -b 'sme-nob.autobil.bin' | ...

Language pairs support[edit]

Currently the labeller works with the following language pairs:

  • sme-nob: the labeller can fully replace the original syntax module (it doesn't cover all the functionality of the original CG, but works reasonably well anyway)
  • kmr-eng: can be tested in the pipeline, but the pair has only a few rules that look at syntax labels

All the needed data for Breton, Kazakh and English has also been prepared, but at the moment br-fr, kk-tat and en-ca simply don't have syntax rules, so the labeller cannot be tested on them.

Labelling performance[edit]

The results of validating the labeller on the test set (accuracy = mean accuracy score over the test set):

Language     Accuracy
North Sami   81.6%
Kurmanji     84.0%
Breton       79.7%
Kazakh       82.6%
English      79.8%
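For reference, one plausible reading of this metric (token-level accuracy per sentence, averaged over the test set) can be computed as below; this is an assumption about the exact evaluation, not the project's evaluation script, and the labels are invented for the example:

```python
def sentence_accuracy(gold, predicted):
    """Share of tokens whose predicted label matches the gold one."""
    correct = sum(g == p for g, p in zip(gold, predicted))
    return correct / len(gold)

def mean_accuracy(gold_sents, pred_sents):
    """Mean of per-sentence accuracies over the whole test set."""
    scores = [sentence_accuracy(g, p)
              for g, p in zip(gold_sents, pred_sents)]
    return sum(scores) / len(scores)

gold = [["@subj", "@root"], ["@amod", "@obj"]]
pred = [["@subj", "@root"], ["@obj", "@obj"]]
print(mean_accuracy(gold, pred))  # -> 0.75
```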



Requirements[edit]

1. Python libraries:

2. Precompiled language pairs which support the labeller (sme-nob, kmr-eng)

How to install a testpack[edit]

NB: currently the testpack contains syntax modules only for sme-nob and kmr-eng.

git clone
cd sfl_testpack

The script adds all the needed files to the language pair directory and updates the modes files. It takes two arguments:


  • work_mode: -lb to install the labeller and change the modes, -cg to revert the changes and use the original syntax module (sme-nob.syn.rlx.bin or kmr-eng.prob) in the pipeline.
  • lang: -sme to install/uninstall the labeller only for sme-nob, -kmr only for kmr-eng, -all for both.

For example, this command installs the labeller and adds it to the pipeline for both pairs:

python -lb -all

And this command reverts the modes changes for sme-nob:

python -cg -sme


Known bugs[edit]

1. The installation script changes the eng-kmr pipeline along with kmr-eng

2. Problems with tag order (the syntactic label is not the last tag)

3. Some words do not get a label

<spectre> is it possible that some words don't get a label ?
<spectre> $ echo "Barzanî di peyama xwe de behsa mijarên girîng û kirîtîk kir." | apertium -d . kmr-eng-tagger
<spectre> ^Barzanî<np><ant><m><sg><obl><@dobj>$ ^di<pr><@case>$ ^peyam<n><f><sg><con><def><@nmod>$ ^xwe<prn><ref><mf><sp><@nmod:poss>$ 
^de<post><@case>$ ^behs<n><f><sg><con><def>$ ^mijar<n><f><pl><con><def><@nmod:poss>$ ^girîng<adj><@amod>$ ^û<cnjcoo><@cc>$ ^*kirîtîk$

Bugs 2 and 3 seem to be fixed, but this should be checked carefully.

To do[edit]

  • Do more tests. MORE.
  • Fix the remaining bugs.
  • Refactor the main code.
  • Continue improving the performance of the models.