This page outlines how to develop a scraper of news websites for Apertium using our '''news scraper'''. The code can be found on GitHub at [https://github.com/apertium/apertium-news-scrapers https://github.com/apertium/apertium-news-scrapers].
   

== How-To ==

===Get to know the website===
# Visit the website which you plan to scrape and locate the archive section, which usually offers an interface to select a given day and see a list of links to articles published on that day.
#* If you can't understand the language the website is written in, ask for help on IRC or use a translator and look for a section marked "Archive". If you're unable to locate an archive, find the sitemap and use it as a starting point.
#* Sometimes you'll be able to locate a calendar that links to a page with articles from each date, which is often the optimal situation.
# Familiarize yourself with the structure of the URL and how manipulating it will yield a different set of articles to scrape.
#* Try to only scrape pages that will be useful. For example, scraping a picture gallery will yield few words, so concentrate on scraping more densely packed pages such as news articles.
#* The URL will sometimes contain a date which can be manipulated to yield all the articles published on a certain day. (e.g. http://example.com/archive/news/20121104.html - "20121104" indicates that this URL will show a list of articles published on 04/11/2012)
#* Other common configurations include a sequential number which marks pages of articles chronologically. For example, the latest articles have a URL containing "1", older ones "2", etc. (e.g. http://example.com/archive/news/343.html - "343" indicates that this will show the 343rd page of the list of news articles)
#* Devise a URL template that you can use string substitutions on to construct a list of URLs to lists of article links, as sketched below.
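For example, here is a minimal sketch (the templates and ranges are hypothetical, not taken from a real site) of turning either kind of URL pattern into a list of archive-page URLs:

<code><pre>
from datetime import date, timedelta

# Hypothetical date-based template: one archive page per day.
DATE_TEMPLATE = "http://example.com/archive/news/%s.html"

def archive_urls_by_date(start, end):
    day = start
    while day <= end:
        yield DATE_TEMPLATE % day.strftime("%Y%m%d")
        day += timedelta(days=1)

# Hypothetical page-number template: sequentially numbered pages of article lists.
PAGE_TEMPLATE = "http://example.com/archive/news/%d.html"

def archive_urls_by_page(first_page, last_page):
    for page in range(first_page, last_page + 1):
        yield PAGE_TEMPLATE % page

# e.g. every archive page for November 2012, plus the ten newest list pages
urls = list(archive_urls_by_date(date(2012, 11, 1), date(2012, 11, 30)))
urls += list(archive_urls_by_page(1, 10))
</pre></code>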
   
===Get a list of articles===

<ol>
<li>Write a driver script named <code>scrp-*.py</code> which, given a certain range of dates (or other parameters depending on the site's structure, e.g. how many pages of articles to scrape if there is no calendar support), generates a list of tuples containing each article's link, title and publication date. A sketch of such a function follows this list.</li>
<ul>
<li>[http://lxml.de/ LXML] and [http://www.crummy.com/software/BeautifulSoup/ BeautifulSoup] are two useful tools for scraping HTML.</li>
<li>Use Chrome/Firefox's Developer Console with Inspect Element to find distinguishing characteristics for each article link element. For example, each article link could be wrapped in a <code>div</code> with <code>.articleLink</code> (it's not always that obvious).</li>
<li>LXML offers many choices when extracting the article info from the page, from picking specific CSS classes to arbitrary XPath expressions.</li>
<li>If you find that selecting all the article info requires a more complex CSS selector, use a [http://css2xpath.appspot.com/ CSS to XPath converter]. For example, consider a situation where the link tag to each article has the <code>articleLink</code> class. A possible CSS selector for this would be <code>.articleLink</code>, which would become <code>descendant-or-self::*[contains(concat(' ', normalize-space(@class), ' '), ' articleLink ')]</code> if converted into an XPath expression. An example using LXML with each expression is demonstrated below.</li>
<code><pre>
rawArticlesHtml = getPage(conn, url, rawContent = True)
articlesHtml = lxml.html.fromstring(rawArticlesHtml)
articleTags = articlesHtml.xpath("descendant-or-self::*[contains(concat(' ', normalize-space(@class), ' '), ' articleLink ')]") # XPath method
articleTags = articlesHtml.find_class("articleLink") # CSS selector method
</pre></code>
<li>As you populate the article list, writing the list to a file is useful for debugging; outputting it to the console could fail due to character encoding issues. The helper method below does either, depending on the <code>display</code> parameter.
<code><pre>
def printArticles(articlesData, fileName, display=False):
    if display:
        for (title, url, date) in articlesData:
            print(title, url, date.isoformat())
    else:
        with open(fileName, 'a', encoding='utf-8') as file:
            for (title, url, date) in articlesData:
                file.write("%s, %s, %s\n" % (title, url, date.isoformat()))
</pre></code>
</li>
<li>Don't worry too much about accidentally populating the article list with duplicate URLs; the scraper is designed to ignore duplicate articles (as long as you implement the <code>url_to_aid()</code> function correctly later).</li>
</ul>
</ol>
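A minimal sketch of such a driver function, reusing the hypothetical <code>getPage()</code> helper, <code>conn</code> connection and <code>.articleLink</code> class from the examples above (adjust all of these to the real site and the helpers in your <code>scrp-*.py</code>):

<code><pre>
import lxml.html

def articles_for_day(conn, archive_url, day):
    """Collect (title, url, date) tuples from one archive page."""
    rawArticlesHtml = getPage(conn, archive_url, rawContent=True)
    articlesHtml = lxml.html.fromstring(rawArticlesHtml)
    articlesHtml.make_links_absolute(archive_url)  # turn relative hrefs into full URLs
    articlesData = []
    # Assumes the .articleLink elements are the <a> tags themselves.
    for link in articlesHtml.find_class("articleLink"):
        title = link.text_content().strip()
        url = link.get("href")
        if url:
            articlesData.append((title, url, day))
    return articlesData
</pre></code>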
   
===Add a <code>Scraper</code> class===

<ol>
<li>Add an entry to the <code>feed_sites</code> dictionary in <code>scraper_classes.py</code> which maps from the name of the website to a unique Scraper class.</li>
<li>Define a new class in <code>scrapers.py</code> that inherits the Scraper class and implements two functions, <code>url_to_aid()</code> and <code>scraped()</code>, whose specifications are described below.</li>
<ol>
<li><code>url_to_aid()</code>: This function takes a URL as input and converts it to a unique "article id" (aid).</li>
<ul>
<li>Many sites use a unique ID inside their article URLs (e.g., http://example.com/news?id=3141592653 or http://example.com/news/3141592653.html); these are fairly simple to extract using a regex or string splitting.</li>
<li>However, if this ID is for some reason not unique, the site doesn't use unique IDs, or the ID is difficult to extract, it's okay to make a hash of the full URL (which should be unique).</li>
<li>There are examples of both of these methods implemented in other scrapers in <code>scrapers.py</code>. Take a look if you get stuck.</li>
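<li>For instance, a minimal sketch of both approaches (this is illustrative only, not code from an existing scraper, and the real <code>url_to_aid()</code> in <code>scrapers.py</code> may have a different signature):</li>
<code><pre>
import hashlib
import re

def url_to_aid(url):
    # Case 1: the site embeds a numeric ID in the URL
    # (e.g. http://example.com/news/3141592653.html): extract it with a regex.
    match = re.search(r'(\d+)(?=\.html$|$)', url)
    if match:
        return match.group(1)
    # Case 2: no usable ID: hash the full URL, which should still be unique.
    return hashlib.sha1(url.encode('utf-8')).hexdigest()
</pre></code>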
</ul>
<li><code>scraped()</code>: This function takes as "input" the HTML contents of the article and outputs a cleaned version of the article's text for inclusion in the XML corpus.</li>
<ul>
<li>First, fill <code>self.doc</code> with the contents of the page by calling <code>self.get_content()</code>. This is all written for you already, so just call the function once and you're ready for the hard stuff.</li>
<li>Now, LXML/BeautifulSoup will be very useful for scraping the actual article content from the HTML of the entire page.</li>
<li>Most likely, the article text will be wrapped in some sort of identifiable container, so follow a procedure similar to the one that proved useful when populating the list of articles, and identify this element.</li>
<li>Take the element which contains the article content, extract it from the HTML, and then clean it with LXML (to remove scripts, etc. which shouldn't be in the corpus).</li>
<li>The cleaning procedure below often suffices to remove all the HTML tags, changing break tags and paragraph tags into line breaks as necessary.</li>
<code><pre>
self.get_content()
cleaned = lxml.html.document_fromstring(lxml.html.clean.clean_html(lxml.html.tostring(self.doc.xpath("//div[@align='justify']")[0]).decode('utf-8')))
cleaned = cleaned.text_content()
return cleaned.strip()
</pre></code>
<li>Sometimes, this won't suffice and you'll have to identify the offending elements and remove them manually from the HTML before invoking LXML's <code>clean</code>, as sketched below.</li>
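<li>For example, a minimal sketch of dropping unwanted elements before cleaning (the <code>articleAd</code> class and the <code>div</code> selector are hypothetical; substitute whatever the site actually uses):</li>
<code><pre>
self.get_content()
content = self.doc.xpath("//div[@align='justify']")[0]
# Hypothetical example: drop advertising blocks before cleaning.
for bad in content.find_class("articleAd"):
    bad.drop_tree()
cleaned = lxml.html.document_fromstring(lxml.html.clean.clean_html(lxml.html.tostring(content).decode('utf-8')))
return cleaned.text_content().strip()
</pre></code>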
</ul>
</ol>
</ol>

===Use <code>Scraper</code> class and test===

<ol>
<li>Finally, in the driver script, loop through the list of articles and send each article to the <code>Scraper</code> class you created to fill the corpus with articles. Have a look at the various <code>scrp-*.py</code> scripts currently available to get a feel for how to use the <code>Scraper</code> class. The code below demonstrates the basic idea.</li>
<ul>
<li>Create a new <code>Writer</code> object in order to save scraped articles. The default writing interval is 60 seconds. To change it, pass the number of seconds as a parameter when calling <code>Writer()</code>. For example, to write the data every 30 seconds, call <code>Writer(30)</code>.</li>
<li>Make sure to set the correct language code when setting up the <code>Source</code> class.</li>
<li>Catch exceptions that occur during scraping, but don't fail silently. You don't want a single badly formatted article to stop the entire process.</li>
<li>Remember to call the <code>Writer</code> object's <code>close()</code> function before closing the scraper. It will make sure all the data has been saved correctly.</li>
<li>Put the main code inside a <code>try</code> block and <code>except</code> a <code>KeyboardInterrupt</code>, in order to save all the scraped data when the user interrupts the program (^C).</li>
</ul>
<code><pre>
w = Writer()

# Create a SIGTERM signal handler in order to save all the article data if the kill command is called.
def term_handler(sigNum, frame):
    w.close()
    sys.exit(0)

signal.signal(signal.SIGTERM, term_handler)

try:
    for (title, url, date) in articles:
        try:
            source = Source(url, title=title, date=date, scraper=ScraperAzadliq, conn=conn) # replace the scraper with the one you created earlier
            source.makeRoot("./", ids=ids, root=root, lang="aze") # replace the language code with the appropriate one
            source.add_to_archive()
            if ids is None:
                ids = source.ids
            if root is None:
                root = source.root
        except Exception as e:
            print(url + " " + str(e))
except KeyboardInterrupt:
    print("\nReceived a keyboard interrupt. Closing the program.")
w.close()
</pre></code>
<li>Scrape a sufficient number of test articles to determine whether there is any extraneous output in the generated corpus (check the XML file created). If you discover that something is wrong, check the <code>scraped()</code> function again to make sure that you've removed all the bad elements.</li>
<ul>
<li>Make sure the article IDs generated are unique; see the sanity-check sketch after this list.</li>
<li>Make sure the URL for each entry corresponds to the article's ID, its title and its publication date.</li>
</ul>
</ol>
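One quick sanity check for ID uniqueness, assuming a <code>url_to_aid()</code> helper like the sketch above and the <code>articles</code> list of (title, url, date) tuples from your driver script (both are illustrative names, not required APIs):

<code><pre>
from collections import Counter

# Count how many URLs map to each article id; any id shared by more than one
# URL would cause articles to be silently skipped, so flag it during testing.
aid_counts = Counter(url_to_aid(url) for (title, url, date) in articles)
duplicates = {aid: count for aid, count in aid_counts.items() if count > 1}
if duplicates:
    print("Article ids shared by more than one URL:", duplicates)
</pre></code>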

=== RFERL ===
If you are scraping [[RFERL_corpora|RFERL content]], you will need the category names and numbers of only the real content categories (e.g. news, politics).

Consider the example URL "http://example.com/archive/news/20131120/330/330.html", which resembles the URL structure often found on RFERL websites. This URL yields a page with a list of articles you can scrape. A summary of the URL's parts follows:

* "archive" indicates that you are exploring the site's archive, the goal of scraping.
* "news" indicates the category of the articles which this URL yields; varying this could allow you to access different sections of the site.
* "20131120" indicates that the publication date of the articles displayed will be "11/20/2013". Varying this part of the URL in your scraping program will allow you to scrape a given date range, as sketched below.
* "330" is the identifier this website uses for the "news" category. In this case it's duplicated; however, this might not always be the case.
   
 
== Issues with newlines ==
 
'''Problem:''' The characters "& # 1 3 ;" (spaced apart intentionally) appear throughout the scraped content after it is written to the .xml file.<br />
'''Research:''' Retrieving the page HTML with either <code>curl</code> or <code>wget</code> results in the problematic characters not appearing in the final .xml output; however, they reappear when the HTML is downloaded through a Python HTTPConnection. Furthermore, since the characters are not present in any earlier output of the page HTML, it is reasonable to assume that they are introduced at the lxml step: <code>lxml.html.document_fromstring(lxml.html.clean.clean_html(lxml.html.tostring(doc.find_class('zoomMe')[1]).decode('utf-8')))</code>. Directly following this step, the characters appear in the XML output. That still leaves the discrepancy between manually downloaded and Python-downloaded HTML, which is likely due to <code>curl</code> and <code>wget</code> treating line endings differently than Python does. This can be painlessly confirmed with a <code>diff</code> command, which shows that most (roughly 95%) of the discrepancies are whitespace. The characters represent "\r", the carriage return. [http://stackoverflow.com/questions/1459170/what-is-13 Online research] shows that these problems can be attributed to Windows line endings being incompatible with Linux/Unix conventions: "When you code in windows, and use "DOS/Windows" line endings, the your lines will end like this "\r\n". In some xhtml editors, that "\r" is illegal so the editor coverts it to "& # 1 3"." Accordingly, running scrp-azzatyk.py shows that the offending characters consistently appear at the ends of lines in the HTML.<br />
'''Suggested Solution:''' The simplest solution is to manually remove the "\r" from the raw HTML after download, like so: <code>res.read().decode('utf-8').replace('\r',' ')</code>. This should have no side effects for two reasons. One, HTML generally collapses consecutive whitespace. Two, each "\r" is likely followed by a "\n", so replacing the "\r" with a space only adds insignificant whitespace while otherwise preserving line structure. This type of solution to this seemingly common problem has been used by others and ensures compatibility with Windows-style "\r\n" line endings. This solution has been implemented.<br /><br />
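A minimal sketch of the workaround in context (the host and path here are hypothetical):

<code><pre>
import http.client

conn = http.client.HTTPConnection("example.com")
conn.request("GET", "/archive/news/20121104.html")
res = conn.getresponse()
# Strip carriage returns before handing the HTML to lxml, so no "&#13;"
# entities end up in the corpus.
html = res.read().decode('utf-8').replace('\r', ' ')
</pre></code>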
 
'''Problem:''' The character "x" appears throughout the scraped content after it is written to the .xml file.<br />
'''Research & Solution:''' The problem was a small error caused by not filtering out a bad class in ScraperAzattyk. The problem has been fixed and the solution has been committed.<br /><br />
'''Problem:''' Paragraphs are not always created correctly in scraped content, i.e. break tags are occasionally ignored.<br />
'''Research:''' Testing shows that the problem occurs when two break tags appear on two separate lines and are directly followed by another tag, generally an <code>em</code> or a <code>strong</code>, though the same problem has been observed with other tags. When the break tags are separated by text, lxml handles them properly; when they are not, lxml fails to recognize them. [https://pastee.org/ev5dc Test script], [https://pastee.org/pp6h4 Test HTML] <br />
'''Suggested Solution:''' Submit a bug report to lxml. We could create [http://lxml.de/element_classes.html custom Element classes]? I'm fairly sure that even if we managed to do that, it would be fairly inelegant. A [https://bugs.launchpad.net/lxml/+bug/1095945 bug report] has been filed. It turns out that the bug was in libxml2 rather than lxml and was addressed in a newer version of libxml2 (check the bug report).<br />
[[Category:Documentation in English]]
