We have used texts from Project Gutenberg before, but so far we have always downloaded them manually. We can also do this automatically, which is useful when we want to access larger amounts of text. (For a single text, it may be faster to download by hand, since that saves us from figuring out the naming scheme of the site we are interested in.) To download web content from within Python, we can use the Python package urllib, documented at http://docs.python.org/library/urllib.html.
The first step is to find the URL of the file or files we are interested in. Let's assume we are interested in John Donne's "Devotions Upon Emergent Occasions". Note that the Gutenberg main page does not allow automatic access, as stated at http://www.gutenberg.org/wiki/Gutenberg:Terms_of_Use, but there are mirrors for which automatic access is allowed.
"Devotions Upon Emergent Occasions" by John Donne is located at "ftp://sailor.gutenberg.lib.md.us/gutenberg/2/3/7/7/23772/23772.txt". We can now access it as follows:
As you can see, opening and reading a web page works in almost the same way as opening and reading a local file: We start with
f = urlopen(...)
and then we can access the data with f.read(), as if it were a local file.
Sometimes, we may want to download web content automatically, but would like to store it in files rather than process it directly. The urllib package supports this too:
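In Python 3, the function for downloading straight to a file is urllib.request.urlretrieve(). A minimal sketch, with a made-up local filename and the download itself left as commented-out usage since it needs network access:

```python
# Sketch: saving web content to a local file instead of processing
# it directly.
from urllib.request import urlretrieve

def save_url(url, filename):
    # Downloads url, writes it to filename, and returns the local path.
    local_path, headers = urlretrieve(url, filename)
    return local_path

# Usage (requires network access):
# save_url("ftp://sailor.gutenberg.lib.md.us/gutenberg/2/3/7/7/23772/23772.txt",
#          "donne_devotions.txt")
```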
We can now process this web content just like normal text. As a first step, we will break it up into words. We will use nltk.word_tokenize() rather than split(). Here is an example of how the two differ:
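The difference can be sketched as follows. Since nltk may not be installed everywhere, the actual nltk.word_tokenize() call is shown in a comment, and its punctuation-splitting behaviour is approximated here with a simple regular expression (an illustration only, not nltk's real algorithm):

```python
import re

s = "Hello, world! This is a test."

# split() keeps punctuation attached to the words:
print(s.split())
# ['Hello,', 'world!', 'This', 'is', 'a', 'test.']

# nltk.word_tokenize(s) would separate the punctuation instead.
# A rough regex approximation of that behaviour:
tokens = re.findall(r"\w+|[^\w\s]", s)
print(tokens)
# ['Hello', ',', 'world', '!', 'This', 'is', 'a', 'test', '.']
```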
So nltk.word_tokenize() is a bit smarter in its handling of punctuation. Here is how we can tokenize the Gutenberg text that we just downloaded. Afterwards, we can load the result into the nltk.Text() format, and inspect it.
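The tokenize-and-inspect step can be sketched like this, assuming nltk is installed and the downloaded text is available in a string (here called raw; the helper name and the word used with concordance() are illustrative):

```python
# Sketch: from raw downloaded text to an nltk.Text object.
def to_nltk_text(raw):
    import nltk
    tokens = nltk.word_tokenize(raw)   # list of word/punctuation strings
    return nltk.Text(tokens)           # wraps tokens for inspection

# Usage:
# text = to_nltk_text(raw)
# text.concordance("God")   # e.g. show occurrences of a word in context
```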
The Gutenberg file we just downloaded was plain text. But a lot of data on the web is in HTML instead. HTML looks a lot like XML, but the tags it uses are pre-defined and are interpreted by browsers as formatting commands.
We can read HTML files using urlopen() again, and use BeautifulSoup to strip out the HTML markup. BeautifulSoup is a package that you need to install on your machine before you can use it. You can find it at http://www.crummy.com/software/BeautifulSoup/.
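A small self-contained example of the markup-stripping step, assuming BeautifulSoup 4 (package name beautifulsoup4) is installed; the HTML fragment is made up so that no download is needed:

```python
from bs4 import BeautifulSoup

html = "<html><body><h1>Headline</h1><p>Some <b>bold</b> text.</p></body></html>"
soup = BeautifulSoup(html, "html.parser")

# get_text() drops all tags and keeps only the textual content:
plain = soup.get_text(separator=" ", strip=True)
print(plain)   # Headline Some bold text.
```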
By the way: If you wanted to automatically download current news stories from the BBC webpage for processing, how could you do that? They are linked from the BBC main page -- but what are the URLs of these subpages?
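One possible approach: download the main page's HTML and collect the href attributes of its &lt;a&gt; tags; those are the URLs of the subpages. Sketched here with Python's standard-library HTML parser on a made-up fragment, so it runs without network access; for the real BBC page you would first fetch the HTML with urlopen():

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collects the href attribute of every <a> tag it sees."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

# Illustrative HTML fragment standing in for a downloaded main page:
page = '<p><a href="/news/one">One</a> and <a href="/news/two">Two</a></p>'
collector = LinkCollector()
collector.feed(page)
print(collector.links)   # ['/news/one', '/news/two']
```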
Many of you will be dealing with texts in different languages. Internally, characters are encoded as numbers. Some characters (A-Z, a-z) are historically privileged in that they have received shorter encodings. Unicode provides encodings for a huge number of additional alphabets. See https://en.wikipedia.org/wiki/Unicode
Within Python, Unicode strings can be handled just like other strings. But for storing them in files and displaying them on screen, it is necessary to encode them. To do that, we need the Python codecs package. We will also need to know the encoding that a file uses.
As an example, we use a text in Portuguese, from Project Gutenberg: A Revolução Portugueza: O 31 de Janeiro (Porto 1891) by Francisco Jorge de Abreu, at http://www.gutenberg.org/ebooks/29484
Download the plain text version to a local file. The Project Gutenberg page informs us that the text is encoded in UTF-8, a Unicode encoding. We need to specify this when we open the file, in order to decode it:
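A self-contained demonstration of this step: we first create a small UTF-8 file in a temporary directory (standing in for the downloaded Gutenberg file, whose name is up to you), then open it with codecs.open(), passing the encoding so the bytes are decoded correctly:

```python
import codecs
import os
import tempfile

# Stand-in for the downloaded file: write a few UTF-8-encoded bytes.
path = os.path.join(tempfile.mkdtemp(), "revolucao.txt")
with open(path, "wb") as f:
    f.write("A Revolução Portugueza".encode("utf-8"))

# Open with an explicit encoding, so the bytes are decoded to a string:
with codecs.open(path, "r", encoding="utf-8") as f:
    text = f.read()

print(text)   # A Revolução Portugueza
```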
It is important that we specify the encoding when opening the text. If we do not do that, the assumption is that it is ASCII text. In that case, we cannot later encode it for printing:
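We can see what goes wrong in a self-contained way: reading the same UTF-8 bytes under an ASCII assumption fails with a UnicodeDecodeError as soon as a non-ASCII character (here the ç and ã of "Revolução") is encountered:

```python
import codecs
import os
import tempfile

# Stand-in for the downloaded file, encoded as UTF-8:
path = os.path.join(tempfile.mkdtemp(), "revolucao.txt")
with open(path, "wb") as f:
    f.write("Revolução".encode("utf-8"))

# Reading it as ASCII fails on the first non-ASCII character:
failed = False
try:
    with codecs.open(path, "r", encoding="ascii") as f:
        f.read()
except UnicodeDecodeError as e:
    failed = True
    print("decoding failed:", e.reason)
```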
More information on working with Unicode in Python is at http://www.evanjones.ca/python-utf8.html
To write Unicode to a file, again use codecs.open():
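A minimal sketch, again using a temporary path so the example is self-contained: the encoding passed to codecs.open() is used to encode everything written, and reading the file back with the same encoding restores the original string:

```python
import codecs
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "out.txt")

# Writing: the string is encoded to UTF-8 bytes on the way out.
with codecs.open(path, "w", encoding="utf-8") as f:
    f.write("A Revolução Portugueza: O 31 de Janeiro")

# Reading it back with the same encoding gives the original string:
with codecs.open(path, "r", encoding="utf-8") as f:
    round_trip = f.read()

print(round_trip)   # A Revolução Portugueza: O 31 de Janeiro
```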