Incorporating guessing into Apertium

From Apertium

Apertium has a coverage problem.

The greater the coverage of the real dictionaries, the more accurate guessing will be. So we wouldn't want to try this with a pair that has 80% coverage, but we would with one that has 95% coverage.

Neural machine translation systems get around this by doing sub-word segmentation, but Apertium can't effectively use this because of its linguistic model.

However, we could incorporate guessing into the platform; here are some ideas.

In an RBMT translation system, guessing needs to take place in three places:

  • Morphological analysis
  • Bilingual transfer
  • Morphological generation

For morphological analysis, guessers can be implemented or trained fairly effectively. They could be based on regular expressions, and some language pairs already do this.

One could also envisage using an existing analyser plus a corpus to train the guesser. For example, start by partitioning the corpus in two, then train the guesser iteratively: first with only 10% of the vocabulary in the existing analyser, then 20%, then 30%, and so on. By the time you finish you should have a reasonable model of unknown words.
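As a toy sketch of this kind of trained guesser (a suffix-frequency model; the lexicon, tag strings and the 50% split are illustrative stand-ins for real lt-expand output, not Apertium code):

```python
from collections import Counter, defaultdict

def train_suffix_guesser(pairs, max_suffix=4):
    """Count how often each word-final suffix maps to each tag string."""
    counts = defaultdict(Counter)
    for surface, tags in pairs:
        for n in range(1, min(max_suffix, len(surface)) + 1):
            counts[surface[-n:]][tags] += 1
    return counts

def guess(counts, word, max_suffix=4):
    """Back off from the longest seen suffix to shorter ones."""
    for n in range(min(max_suffix, len(word)), 0, -1):
        suffix = word[-n:]
        if suffix in counts:
            return counts[suffix].most_common(1)[0][0]
    return None

# Toy lexicon standing in for lt-expand output (hypothetical data).
lexicon = [
    ("cantava", "vblex.pii.p3.sg"), ("parlava", "vblex.pii.p3.sg"),
    ("cantato", "vblex.pp"), ("parlato", "vblex.pp"),
]

# Simulate partial dictionary coverage: train on half, guess the rest.
half = len(lexicon) // 2
model = train_suffix_guesser(lexicon[:half])
print(guess(model, "volava"))   # unseen verb -> vblex.pii.p3.sg
```

In the iterative scheme above one would repeat this with growing fractions of the vocabulary and measure accuracy on the held-out half each time.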

For the bilingual transfer, things are more difficult, but one could imagine using techniques such as those of Artetxe et al. to build a translation guesser from the existing bidix and two monolingual corpora in a similar way.

Morphological generation for the regular part of the paradigm is largely a solved problem and could be implemented fairly easily.
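A minimal sketch of generation for the regular part of a paradigm: a suffix table keyed on the tag string. The paradigm contents and tag names here are purely illustrative:

```python
# Hypothetical paradigm table: tag string -> suffix appended to the stem.
PARADIGM_N_F = {"n.f.sg": "a", "n.f.pl": "e"}

def generate(lemma, tags, paradigm, stem_cut=1):
    """Strip the citation-form ending and attach the inflectional suffix."""
    stem = lemma[:-stem_cut]
    suffix = paradigm.get(tags)
    if suffix is None:
        return "#" + lemma          # mimic lt-proc's generation-failure marker
    return stem + suffix

print(generate("casa", "n.f.pl", PARADIGM_N_F))   # -> case
```

Irregular forms would still need real dictionary entries; this only covers the productive part of the paradigm.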

Rule component

Morphological guessing rules might look something like the following. For example, the first rule below adds a first-name (np.ant) reading to an unknown capitalised word appearing between a first name and a surname (np.cog):

     <match tags="np.ant"/> 
     <match case="Aa" unknown="true"><add-reading tags="np.ant"/></match>
     <match tags="np.cog"/>

     <match tags=""/> 
     <match tags="pr"><add-reading tags=""/></match>
     <match tags=""/>

     <match tags="quot"/> 
     <match case="Aa"><add-reading tags=""/></match>
     <match tags="quot"/>

     <match ends-with="ista" tags="*"><add-reading tags=""/></match>
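As a sketch of how rules like these might be applied over a token window (the token representation, condition names and example sentence are assumptions modelled on the XML above, not Apertium's actual implementation):

```python
import re

# A token is a surface form plus the set of readings (tag strings) it has.
# A rule is one (condition, reading_to_add) pair per context slot, mirroring
# the XML sketch above: np.ant / unknown+capitalised / np.cog.

def matches(token, cond):
    if "tags" in cond and cond["tags"] not in token["readings"]:
        return False
    if cond.get("unknown") and token["readings"]:
        return False
    if cond.get("case") == "Aa" and not re.fullmatch(r"[A-Z]\w*", token["surface"]):
        return False
    return True

def apply_rule(tokens, rule):
    width = len(rule)
    for i in range(len(tokens) - width + 1):
        window = tokens[i:i + width]
        if all(matches(t, c) for t, (c, _) in zip(window, rule)):
            for t, (_, add) in zip(window, rule):
                if add:
                    t["readings"].add(add)

rule = [({"tags": "np.ant"}, None),
        ({"case": "Aa", "unknown": True}, "np.ant"),
        ({"tags": "np.cog"}, None)]

sent = [{"surface": "John", "readings": {"np.ant"}},
        {"surface": "Maynard", "readings": set()},      # unknown word
        {"surface": "Keynes", "readings": {"np.cog"}}]
apply_rule(sent, rule)
print(sent[1]["readings"])   # -> {'np.ant'}
```

The real rule component would of course work on the Apertium stream format rather than Python dictionaries.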

Guesser for orthographic variation

Here is an idea for dealing with unknown words caused by spelling mistakes or orthographic variation:

  • Word and character embeddings
  • +1, -1 context
  • Analyses for an unknown word (based on an existing analysis string)
  • Take a corpus that has variation in it, and try and …
  • Sometimes we'll want to leave a word unknown
  • Will we ever want to add an analysis to an existing word?

Another guesser for orthographic variation

Let's say that we already have some orthographic variation in the dictionary. Then we can make a training set, e.g.:

$ lt-expand apertium-scn.scn.dix | tee /tmp/analyses | cut -f1 -d':' > /tmp/surface
$ cat /tmp/analyses | sed 's/:[<>]:/:/g' | cut -f2 -d':' | sed 's/.*/^&$/g' | lt-proc -d scn.autogen.bin > /tmp/surface.2
$ paste /tmp/surface /tmp/surface.2 | grep -v '[~#]'

splicitazzioni	splicitazzioni
splicitazzioni	splicitazzioni
splicitazioni	splicitazzioni
splicitazione	splicitazzioni
splicitazziuni	splicitazzioni
splicitaziuni	splicitazzioni
splicitazzioni	splicitazzioni
papulazzioni	papulazzioni
papulazzioni	papulazzioni
papulazioni	papulazzioni
papulazione	papulazzioni
papulazziuni	papulazzioni
papulaziuni	papulazzioni


We could run something like this in the pipe before lt-proc, and then allow lt-proc to look up the analyses of the various forms and take the union:

echo "papulazione" | apertium-variation -b 3 variation.bin 

Individually these might get:
papulazione - *papulazione
papulazzioni - papulazzioni<n><f><sp>
papulazioni - papulazzioni<n><f><sp>

So the full pipeline would be:

echo "papulazzioni" | apertium-variation -b 3 variation.bin  | lt-proc scn.automorf.bin
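As a rough sketch of how such a variant lookup might behave (the edit-distance bound standing in for -b 3, and the one-entry lexicon, are illustrative assumptions; a real tool would likely use a trained character-level model rather than plain edit distance):

```python
def edit_distance(a, b):
    """Plain Levenshtein distance (dynamic programming)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                           prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

# Known forms and their analyses, standing in for the lexicon (toy data).
LEXICON = {"papulazzioni": {"papulazzioni<n><f><sp>"}}

def analyses_with_variants(word, bound=3):
    """Look a word up; if it is unknown, take the union of the analyses of
    all known forms within `bound` edits (the idea behind -b 3 above)."""
    if word in LEXICON:
        return set(LEXICON[word])
    out = set()
    for known, an in LEXICON.items():
        if edit_distance(word, known) <= bound:
            out |= an
    return out

print(analyses_with_variants("papulazione"))   # -> {'papulazzioni<n><f><sp>'}
```

Here "papulazione" is two edits away from "papulazzioni", so it picks up that form's analysis; a form further than the bound would stay unknown, as desired.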