Wikipedia Extractor
Goal
This tool extracts the main text from XML Wikipedia dump files (available at http://dumps.wikimedia.org/backup-index.html, ideally the "pages-articles" file), producing a text corpus, which is useful for training unsupervised part-of-speech taggers, n-gram language models, etc.
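For example, a "pages-articles" dump can be fetched directly from the dump server; the language code and file name below are only an illustration, so check the dump index for the file you actually want:
$ wget https://dumps.wikimedia.org/kkwiki/latest/kkwiki-latest-pages-articles.xml.bz2 -O dump.xml.bz2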
It was written by BenStobaugh during GCI-2013 and can be downloaded from SVN at https://svn.code.sf.net/p/apertium/svn/trunk/apertium-tools/WikiExtractor.py
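One way to fetch the script (assuming the Subversion client is installed) is:
$ svn export https://svn.code.sf.net/p/apertium/svn/trunk/apertium-tools/WikiExtractor.py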
This version is much simpler than the old one: it automatically removes any formatting and outputs the plain text to a single file. To use it, run the following command in your terminal, where dump.xml.bz2 is the Wikipedia dump:
$ python3 WikiExtractor.py --infn dump.xml.bz2
This will run through all of the articles, extract all of the text, and put it in wiki.txt. This version also supports compressed input (BZip2 and Gzip), so you can use dump.xml.bz2 or dump.xml.gz instead of dump.xml. You can also compress (BZip2) the output file by adding --compress to the command, as in the example below.
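For instance, to read a Gzip-compressed dump and write a BZip2-compressed corpus (the compressed output is presumably named wiki.txt.bz2, but check what the script reports):
$ python3 WikiExtractor.py --infn dump.xml.gz --compress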
You can also run python3 WikiExtractor.py --help to get more details.