Building dictionaries

Hi Apertium users and developers:

Some of you have been brave enough to start to write new language pairs for Apertium. That makes me (and all of the Apertium troop) very happy and thankful, but, most importantly, makes Apertium useful to more people.

This time I want to share some lessons I have learned after building some dictionaries: the importance of frequency estimates. For the new pairs to have the best possible coverage with a minimum of effort, it is very important to add words and rules in decreasing frequency, starting with the most frequent words and phenomena.

A person's intuition about which words are important or frequent can be very deceptive. Therefore, the best one can do is to collect a lot of text (millions of words if possible) which is representative of what one wants to translate, and study the frequencies of words and phenomena. Get it from Wikipedia, or from newspapers, or write a robot that harvests text from the web.

It is quite easy to make a crude "hit parade" of words using a simple Unix pipeline (a single line that splits the text into one word per line, then counts and ranks the words):

$ cat mybigrepresentative.txt | tr ' ' '\012' | sort -f | uniq -c | sort -nr > hitparade.txt

[I took this from Unix for Poets, I think.]

Of course, this can be improved a lot, but it serves for illustration purposes.
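For example, a slightly cleaner variant (an untested sketch: it lowercases the text and splits on anything that is not an ASCII letter, so it will mangle accented letters; adapt the character ranges to your language) would be:

$ tr 'A-Z' 'a-z' < mybigrepresentative.txt | tr -sc 'a-z' '\012' |
  sort | uniq -c | sort -nr > hitparade.txt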

You will find interesting properties in this list.

One is that if you multiply the rank of a word by its frequency, you get a number which is pretty much constant. That's called Zipf's law.
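You can check this on your own hit parade: each line of hitparade.txt is "frequency word", sorted by decreasing frequency, so the line number is the rank. Something like

$ awk '{print NR, $2, NR * $1}' hitparade.txt | head -n 20

prints rank, word and rank times frequency for the twenty most frequent words, and the third column should stay within roughly the same order of magnitude.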

Another is that about half of the list consists of "hapax legomena" (words that appear only once).
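This is also easy to check, since uniq -c puts the count in the first column:

$ awk '$1 == 1' hitparade.txt | wc -l
$ wc -l < hitparade.txt

The first command counts the hapaxes, the second the distinct words; the first figure will typically be around half of the second.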

And a third is that the 1,000 or so most frequent words may cover about 75% of the text.
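To measure this coverage on your own corpus, something along these lines works (summing the counts of the top 1,000 words and dividing by the total count):

$ awk 'NR <= 1000 {top += $1} {total += $1} END {printf "%.1f%%\n", 100 * top / total}' hitparade.txt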

So use lists like these when you are building dictionaries.

If one of your languages is English, there are interesting ready-made frequency lists available.

But bear in mind that these lists are also based on a particular usage model of English, which is not "naturally occurring" English.

The same applies to other linguistic phenomena. Linguists tend to focus on very infrequent phenomena which are key to the identity of a language, or on what is different between languages. But these "jewels" are usually not the "building blocks" you would use to build translation rules. So do not get carried away. Trust only frequencies and lots of real text...

Best,

Mikel

Wikipedia dumps can be had from https://dumps.wikimedia.org/.

For help in processing them see:

The dumps need cleaning up (removing wiki syntax, XML, etc.), but they can provide a substantial amount of text, both for frequency analysis and as sentences for POS tagger training. It takes some work and isn't as easy as getting a nice ready-made corpus, but on the other hand dumps are available in around 270 languages.

You'll want the one entitled "Articles, templates, image descriptions, and primary meta-pages. -- This contains current versions of article content, and is the archive most mirror sites will probably want."
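For example, assuming the current dump server layout (the dated file name used in the examples below will differ from the "latest" link), fetching the Afrikaans dump looks something like:

$ wget https://dumps.wikimedia.org/afwiki/latest/afwiki-latest-pages-articles.xml.bz2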

Something like (for Afrikaans):

$ bzcat afwiki-20070508-pages-articles.xml.bz2 | grep '^[A-Z]' | sed 's/$/\n/g' |
  sed 's/\[\[.*|//g' | sed 's/\]\]//g' | sed 's/\[\[//g' | sed 's/&.*;/ /g'

This will give you an approximately useful list of one sentence per line (stripping out most of the extraneous formatting).

To build the hit parade directly from the dump, try something like (again for Afrikaans):

$ bzcat afwiki-20070508-pages-articles.xml.bz2 | grep '^[A-Z]' | sed 's/$/\n/g' |
  sed 's/\[\[.*|//g' | sed 's/\]\]//g' | sed 's/\[\[//g' | sed 's/&.*;/ /g' |
  tr ' ' '\012' | sort -f | uniq -c | sort -nr > hitparade.txt

I just hacked these examples together... I'm sure you can come up with something better :)

Fran