Calculating coverage

Notes on calculating coverage from Wikipedia dumps (based on Asturian#Calculating coverage).

(Mac OS X `sed' doesn't allow \n in replacements, so I just use an actual (escaped) newline, as in the example below.)
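
For example, the same substitution written for GNU sed and in the portable form used in the scripts below (a minimal illustration):

# GNU sed accepts \n in the replacement:
sed 's/>/>\n/g'

# BSD/Mac OS X sed needs a literal, backslash-escaped newline:
sed 's/>/>\
/g'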

wikicat.sh:

#!/bin/sh
# Clean up a wiki dump for running through apertium-destxt.

# awk prints full lines, so make sure each HTML element is on its own line:
bzcat "$@" | sed 's/>/>\
/g' | sed 's/</\
</g' |\
# keep only the material between <text ...> and </text>:
awk '
/<text.*>/,/<\/text>/ { print $0 }
' |\
# replace full stops with spaces:
sed 's/\./ /g' |\
# wiki markup: keep "bar" and "fie" from [[foo|bar]] and [[fie]]:
sed 's/\[\[[^|]*|//g' | sed 's/\]\]//g' | sed 's/\[\[//g' |\
# remove entities:
sed 's/&.*;/ /g' |\
# replace remaining punctuation with spaces:
sed 's/[;:?,]/ /g' |\
# keep only lines starting with a capital letter, dropping tables, style
# information etc. -- put your language's alphabet in the character class:
grep '^[ 	]*[A-ZÆØÅ]'
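
To sanity-check the cleanup, you can peek at the first few lines of output (the dump file name is the one used in the examples below):

$ ./wikicat.sh nnwiki-20090119-pages-articles.xml.bz2 | head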

count-tokenized.sh:

#!/bin/sh
# http://wiki.apertium.org/wiki/Asturian#Calculating_coverage

# Analyse the corpus and put each token on its own line,
# so the result can be counted with wc -l:
apertium-destxt | lt-proc "$1" | apertium-retxt |\
# for some reason putting the newline in the replacement directly doesn't
# work here, so we use two seds:
sed 's/\$[^^]*\^/$^/g' | sed 's/\$\^/$\
^/g'
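
After lt-proc, each line holds one token in the Apertium stream format; tokens the analyser doesn't know get an asterisk after the slash, which is what the greps below test for. Roughly (the word and tags are made-up illustrations):

^ord/ord<n><nt><sg><ind>$
^blah/*blah$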

To count all tokens in a wiki dump:

$ ./wikicat.sh nnwiki-20090119-pages-articles.xml.bz2 | ./count-tokenized.sh nn-nb.automorf.bin | wc -l

To count only the tokens that have at least one analysis (naïve coverage), drop the unknowns (marked with /*) before counting:

$ ./wikicat.sh nnwiki-20090119-pages-articles.xml.bz2 | ./count-tokenized.sh nn-nb.automorf.bin | grep -v '\/\*' | wc -l
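
To turn the two counts into a percentage without analysing the corpus twice, a minimal sketch (the temporary file name is arbitrary):

$ ./wikicat.sh nnwiki-20090119-pages-articles.xml.bz2 |\
   ./count-tokenized.sh nn-nb.automorf.bin > /tmp/tokens.txt
$ total=$(wc -l < /tmp/tokens.txt)
$ known=$(grep -cv '\/\*' /tmp/tokens.txt)
$ echo "naive coverage: $known/$total = $(echo "scale=2; 100 * $known / $total" | bc)%"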

To find the top unknown tokens (the sed character class below contains a space and a literal tab):

$ ./wikicat.sh nnwiki-20090119-pages-articles.xml.bz2 | ./count-tokenized.sh nn-nb.automorf.bin |\
   sed 's/[ 	]*//g' | grep '\/\*' | sort -f | uniq -c | sort -gr | head
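
Each output line is a count followed by the unknown token (still in stream format, ^word/*word$), most frequent first; these are natural candidates for adding to the dictionary.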