
Writing a scraper

From Apertium
Revision as of 21:46, 29 December 2012

This page outlines how to develop a scraper for Apertium using our RFERL scraper. The code can be found in our Subversion repository at https://apertium.svn.sourceforge.net/svnroot/apertium/trunk/apertium-tools/scraper.

scrp-*.py

The first thing you'll need is a script that collects the URLs and titles of a batch of news articles. The script then loops over these articles and passes each URL and title to the Scraper subclass you'll write (below) to fill the corpus.
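A minimal sketch of such a driver script follows. The listing HTML, the URL pattern, and the regex are illustrative assumptions, not taken from the actual RFERL scraper; a real scrp-*.py would fetch the listing page from the site itself:

```python
import re

# Hypothetical listing-page HTML; a real scrp-*.py would download this
# from the news site's archive or front page.
listing_html = '''
<ul>
  <li><a href="/news/3141592653.html">First headline</a></li>
  <li><a href="/news/2718281828.html">Second headline</a></li>
</ul>
'''

# Collect (url, title) pairs for every article linked on the listing page.
articles = re.findall(r'<a href="(/news/\d+\.html)">([^<]+)</a>', listing_html)

for url, title in articles:
    # In a real script you would hand each pair to your Scraper subclass
    # here, so it can fetch the article and add it to the corpus.
    print(url, title)
```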

scrapers.py

You need to define a new class in scrapers.py that inherits from the Scraper class.

Your new class must implement two methods:

  • url_to_aid():
This takes a URL and converts it to a unique "article id". For sites that use some form of unique id for their articles (e.g., http://example.com/news?id=3141592653 or http://example.com/news/3141592653.html), you'll want to extract the id, probably with a simple regex. However, if the id is for some reason not unique, or the site doesn't use unique ids, or the id is difficult to extract, it's fine to use a hash of the full URL instead (which should be unique as long as the URL is). There are examples of both approaches in the other scrapers in scrapers.py.
  • scraped():
The first thing this function does is fill self.doc with the contents of the page by calling self.get_content(). This is already written for you, so just call the function once and you're ready for the hard part.
The hard part is extracting a cleaned, text-only version of just the article content from the page. First make sure you know which element in the page consistently contains only the article content, and extract it with lxml. Then clean that element with lxml (it may contain scripts and other markup that would otherwise end up in the output) and take its .text_content(). An example of all this follows:
        self.get_content()
        # Pull out the element that holds the article body.
        element = self.doc.xpath("//div[@align='justify']")[0]
        # Strip scripts, styles, and other unwanted markup.
        html = lxml.html.clean.clean_html(lxml.html.tostring(element).decode('utf-8'))
        cleaned = lxml.html.document_fromstring(html)
        # Keep only the text, trimming surrounding whitespace.
        return cleaned.text_content().strip()
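The two strategies described for url_to_aid() can be sketched as follows. The URL pattern, the regex, and the choice of SHA-1 for the fallback hash are illustrative assumptions, not code from scrapers.py:

```python
import hashlib
import re

def url_to_aid(url):
    # If the site embeds a numeric article id in the URL, extract it directly.
    match = re.search(r'/news/(\d+)\.html', url)
    if match:
        return match.group(1)
    # Otherwise fall back to a hash of the full URL, which is unique
    # as long as the URL itself is.
    return hashlib.sha1(url.encode('utf-8')).hexdigest()

print(url_to_aid('http://example.com/news/3141592653.html'))  # prints 3141592653
```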