Difference between revisions of "Task ideas for Google Code-in/Add words to monolingual dictionary"
Latest revision as of 19:44, 30 December 2019
- Select a language module, ideally one for a language you know.
- Install Apertium locally from nightlies (instructions here); clone the relevant language module from GitHub; compile it; and check that it works. Alternatively, get the Apertium VirtualBox image, update it, and check out and compile the language pair.
- Using a large enough corpus of representative text in the language (e.g. plain text taken from Wikipedia, newspapers, literature, etc.) detect the 250 most frequent unknown words (words in the source document which are not in the dictionary). See below for information about how to do this. Note: the beginner version of this task only requires 100 words.
- Add the words to the monolingual dictionary (the appropriate .dix or .lexc file) so that they are not unknown anymore. Make sure to categorise stems correctly (this can be hard, so please check with your mentor if you're unsure about anything!).
- Compile and test again.
- Submit a pull request to the GitHub repository with your updates.
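One quick way to check that your additions took effect is to re-run the analyser over the corpus before and after your changes and compare the unknown-word rate. A minimal sketch of that comparison step in Python (the function name is hypothetical, the corpus strings are invented; the only convention assumed from above is that unknown tokens carry a leading `*`):

```python
def unknown_rate(analysed):
    """Fraction of tokens the analyser marked as unknown (leading '*')."""
    tokens = analysed.split()
    if not tokens:
        return 0.0
    return sum(1 for t in tokens if t.startswith('*')) / len(tokens)

# Invented example: two unknowns out of eight tokens before adding words
before = unknown_rate("the *blorf cat sat on the *blorf mat")
after = unknown_rate("the blorf cat sat on the blorf mat")
```

The rate should drop after you add the words and recompile; if it doesn't, the new entries probably didn't make it into the compiled binary.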
How to find the most frequent unknowns
Paraphrased from Unhammer:
- analyse your corpus, make it one word per line, grab only the ones with * at the start, sort, count number of hits per word, sort again
- e.g.
zcat corpus.txt.gz | apertium -d . ron-morph | tr ' ' '\n' | grep '^\*' | grep -o '\*.*\^' | sort | uniq -c | sort -rn > unknown.txt
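If you prefer to post-process the analyser output in a script rather than a shell pipeline, the counting step can be sketched in Python. This is only an illustration, not part of the task; the function name and the sample string are invented, and the only assumption carried over from the pipeline is that unknown tokens start with `*`:

```python
from collections import Counter

def count_unknowns(analysed):
    """Count tokens the analyser could not recognise (leading '*'),
    most frequent first -- like sort | uniq -c | sort -rn."""
    unknowns = [tok.lstrip('*') for tok in analysed.split()
                if tok.startswith('*')]
    return Counter(unknowns).most_common()
```

For example, `count_unknowns("casa *foo dog *foo *bar")` returns `[('foo', 2), ('bar', 1)]` — the same hitlist shape as unknown.txt, most frequent word first.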
<Unhammer> hitlist will be unknowns sorted by frequency, but you might have to skip a couple that are "strange" or difficult to add
<Unhammer> and that's ok, as long as you start from the most frequent and work your way down