Unigram tagger

apertium-tagger from m5w/apertium supports all of the unigram models from "A set of open-source tools for Turkish natural language processing."

Install

First, install all prerequisites. See Installation#If you want to add language data / do more advanced stuff. Then, replace <directory> with the directory you'd like to clone m5w/apertium into and clone the repository.

git clone https://github.com/m5w/apertium.git <directory>

Then, see Minimal installation from SVN#Set up environment. Finally, configure, build, and install m5w/apertium. See Minimal installation from SVN#Configure, build, and install.

Usage

See apertium-tagger -h.

Train a Model on a Hand-Tagged Corpus

First, get a hand-tagged corpus as one would for all other models.

$ cat handtagged.txt
^a/a<a>$
^a/a<b>$
^a/a<b>$
^aa/a<a>+a<a>$
^aa/a<a>+a<b>$
^aa/a<a>+a<b>$
^aa/a<b>+a<a>$
^aa/a<b>+a<a>$
^aa/a<b>+a<a>$
^aa/a<b>+a<b>$
^aa/a<b>+a<b>$
^aa/a<b>+a<b>$
^aa/a<b>+a<b>$

Example 1: a Hand-Tagged Corpus for apertium-tagger

Then, replace MODEL with the unigram model from "A set of open-source tools for Turkish natural language processing" you'd like to use, replace SERIALISED_BASIC_TAGGER with the filename to which you'd like to write the model, and train the tagger.

$ apertium-tagger -s 0 -u MODEL SERIALISED_BASIC_TAGGER handtagged.txt

Disambiguate

Either write input to a file or pipe it.

$ cat raw.txt
^a/a<a>/a<b>/a<c>$
^aa/a<a>+a<a>/a<a>+a<b>/a<b>+a<a>/a<b>+a<b>/a<a>+a<c>/a<c>+a<a>/a<c>+a<c>$

Example 2: Input for apertium-tagger

Replace MODEL with the unigram model from "A set of open-source tools for Turkish natural language processing" you'd like to use, replace SERIALISED_BASIC_TAGGER with the file to which you wrote the unigram model, and disambiguate the input.

$ apertium-tagger -gu MODEL SERIALISED_BASIC_TAGGER raw.txt
^a/a<b>$
^aa/a<b>+a<b>$
$ echo '^a/a<a>/a<b>/a<c>$
^aa/a<a>+a<a>/a<a>+a<b>/a<b>+a<a>/a<b>+a<b>/a<a>+a<c>/a<c>+a<a>/a<c>+a<c>$' | \
apertium-tagger -gu MODEL SERIALISED_BASIC_TAGGER
^a/a<b>$
^aa/a<b>+a<b>$

Unigram Models

See section 5.3 of "A set of open-source tools for Turkish natural language processing."

Model 1

See section 5.3.1 of "A set of open-source tools for Turkish natural language processing." This model scores each analysis string in proportion to its frequency with add-one smoothing. Consider the following corpus.

^a/a<a>$
^a/a<b>$
^a/a<b>$

Passed the lexical unit ^a/a<a>/a<b>/a<c>$, the tagger assigns the analysis string a<a> a score of

Let TokenCountAnalysisString be the frequency of the analysis string "a<a>" in the corpus.
score("a<a>") = TokenCountAnalysisString + 1 = (1) + 1 = 2

and a<b> a score of (2) + 1 = 3. The unknown analysis string a<c> is assigned a score of 1. If reconfigured with --enable-debug, the tagger prints such calculations to stderr:



score("a<a>") ==
  2 ==
  2.000000000000000000
score("a<b>") ==
  3 ==
  3.000000000000000000
score("a<c>") ==
  1 ==
  1.000000000000000000
^a<b>$
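
As a minimal sketch of model 1's scoring (hypothetical names, not the apertium-tagger source), add-one smoothing amounts to looking up an analysis string's corpus frequency and adding one, so unknown analysis strings score 1:

#include <cstddef>
#include <map>
#include <string>

// Sketch of model 1's add-one smoothing: an analysis string's score
// is its corpus frequency plus one. Hypothetical names, not the
// apertium-tagger source.
std::size_t score(const std::map<std::string, std::size_t> &frequency,
                  const std::string &analysis) {
  const auto it = frequency.find(analysis);
  return (it == frequency.end() ? 0 : it->second) + 1;
}

// With the corpus above, frequency == {{"a<a>", 1}, {"a<b>", 2}}:
//   score(frequency, "a<a>") == 2
//   score(frequency, "a<b>") == 3
//   score(frequency, "a<c>") == 1 (unknown)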

Training on Corpora with Ambiguous Lexical Units

Consider the following corpus.

$ cat handtagged.txt
^a/a<a>$
^a/a<a>/a<b>$
^a/a<b>$
^a/a<b>$

For the second lexical unit, the probabilities of a<a> and a<b> are both one half. However, all the unigram models store frequencies as std::size_t, an integral type.

To account for this, the tagger stores the LCM of all lexical units' sizes, where a lexical unit's size is the size of its analysis vector. It initializes this value to 1, expecting unambiguous lexical units. The size of this corpus's first lexical unit, ^a/a<a>$, is 1, which divides the LCM, 1, so the tagger increments the frequency of its analysis, a<a>, by LCM / size = (1) / (1) = 1.

The size of the next lexical unit, ^a/a<a>/a<b>$, is 2, which doesn't divide the LCM, 1, so the tagger multiplies both the LCM and all previously stored analysis frequencies by 2: the frequency of a<a> becomes 2. Then the tagger increments the frequency of each of this lexical unit's analyses, a<a> and a<b>, by LCM / size = (2) / (2) = 1. The frequency of a<a> is then 3, and the frequency of a<b>, 1.

The tagger increments the frequency of the next lexical unit's analysis, a<b>, by LCM / size = (2) / (1) = 2. After doing the same for the last lexical unit, the frequency of a<a> is 3, and the frequency of a<b>, 5.

Each model implements functions to increment analyses and multiply previous ones, so this method works for models 2 and 3 as well.
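
The following is a minimal sketch of this counting scheme; the container and function names are hypothetical, not the apertium-tagger source.

#include <cstddef>
#include <map>
#include <numeric> // std::lcm (C++17)
#include <string>
#include <vector>

// Sketch of the LCM-based counting scheme described above:
// frequencies stay integral even when a lexical unit is ambiguous.
std::map<std::string, std::size_t>
countAnalyses(const std::vector<std::vector<std::string>> &corpus) {
  std::map<std::string, std::size_t> frequency;
  std::size_t LCM = 1; // expect unambiguous lexical units at first
  for (const auto &analyses : corpus) {
    const std::size_t size = analyses.size();
    if (LCM % size != 0) {
      // size doesn't divide the LCM: scale the LCM and every stored
      // frequency so that LCM / size stays integral
      const std::size_t factor = std::lcm(LCM, size) / LCM;
      LCM *= factor;
      for (auto &entry : frequency)
        entry.second *= factor;
    }
    // each analysis of an n-way ambiguous unit gets weight LCM / n
    for (const auto &analysis : analyses)
      frequency[analysis] += LCM / size;
  }
  return frequency;
}

Run on the corpus above, this yields a frequency of 3 for a<a> and 5 for a<b>, matching the walkthrough.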

TODO: If one passes the -d option to apertium-tagger, the tagger prints warnings about ambiguous analyses in corpora to stderr.

$ apertium-tagger -ds 0 -u 1 handtagged.txt
apertium-tagger: handtagged.txt: 2:13: unexpected analysis "a<b>" following analysis "a<a>"
^a/a<a>/a<b>$
            ^

Model 2

See section 5.3.2 of "A set of open-source tools for Turkish natural language processing."

Consider the same corpus from Unigram tagger#Model 1.

The tag string <b> is twice as frequent as <a>. However, model 1 scores b<a> and b<b> equally because neither analysis appears in the corpus.

This model splits each analysis string into a root, r, and the part of the analysis string that isn't the root, a. An analysis string's root is its first lemma. For s<t>, r is s and a is <t>; for s<t>+u<v>, r is s and a is <t>+u<v>. The tagger scores each analysis string in proportion to the product of the probability of r given a, with additive smoothing, and the frequency of a. This model therefore scores unknown analysis strings with frequent tag strings higher than unknown analysis strings with infrequent tag strings.
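
A minimal sketch of this split, assuming the root is everything before the first '<' (a hypothetical helper, not the apertium-tagger source):

#include <string>
#include <utility>

// Split an analysis string into its root r (the first lemma, i.e.
// everything before the first '<') and the remainder a. Hypothetical
// helper, not from the apertium-tagger source.
std::pair<std::string, std::string> splitAnalysis(const std::string &s) {
  const std::string::size_type pos = s.find('<');
  if (pos == std::string::npos)
    return {s, ""}; // no tags: the whole string is the root
  return {s.substr(0, pos), s.substr(pos)};
}

// splitAnalysis("s<t>")      == {"s", "<t>"}
// splitAnalysis("s<t>+u<v>") == {"s", "<t>+u<v>"}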

Passed the lexical unit ^b/b<a>/b<b>$, the tagger assigns the analysis string b<a> a score of

Let TokenCountAnalysisString be the frequency of the analysis string "b<a>" in
  the corpus.
Let TokenCountTagString be the frequency of the tag string "<a>" in the corpus.
Let TypeCountTagString be the number of unique analysis strings with the tag
  string "<a>".
score = (TokenCountAnalysisString + 1) /
    (TokenCountTagString + 1 + TypeCountTagString) * (TokenCountTagString + 1) =
  [(0) + 1] / [(1) + 1 + (2)] * [(1) + 1] =
  1 / 4 * 2 =
  1 / 2

Note that TypeCountTagString includes the scored analysis string's r. For a known analysis string such as a<a>, TypeCountTagString would be 1.

The tagger assigns the analysis string b<b> a score of [(0) + 1] / [(2) + 1 + (2)] * [(2) + 1] = 1 / 5 * 3 = 3 / 5.
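
Both calculations are easy to reproduce. The following sketch evaluates the model 2 formula with the counts from the worked examples; the parameter names follow this page's notation, not apertium-tagger's identifiers.

#include <cstddef>

// Model 2 score as defined above: the additively smoothed probability
// of the root given the tag string, times the tag string's frequency.
double score(std::size_t tokenCountAnalysisString,
             std::size_t tokenCountTagString,
             std::size_t typeCountTagString) {
  return static_cast<double>(tokenCountAnalysisString + 1) /
         (tokenCountTagString + 1 + typeCountTagString) *
         (tokenCountTagString + 1);
}

// score(0, 1, 2) == 1.0 / 4 * 2 == 0.5  // b<a>
// score(0, 2, 2) == 1.0 / 5 * 3 == 0.6  // b<b>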