My struggle to download every Project Gutenberg book in English
UPDATE: My original issue was:
- I wanted to download all Gutenberg books in English
- I wanted to download them all in text format only
- Gutenberg offered me a simple way to do this in a one-liner
- It doesn't work
Unfortunately that original problem hasn't been solved and I still don't have a way to download only the English books, but people have been very helpful and I now know a lot more. Data hoarders, you should read below and start mirroring … if you have the space!
I wanted to do this for a particular project, not just the hoarding, but let's just say we want to do this.
Let's also say, to make it simple, we're going to download only .txt versions of the books.
Gutenberg have a page telling you you're allowed to do this using wget with a 2-second wait between requests, and it gives the command as
wget -w 2 -m -H "http://www.gutenberg.org/robot/harvest?filetypes[]=txt&langs[]=en"
Now, I believe this is supposed to get a series of HTML pages (following a "next page" link every time) which contain links to zip files, and to download not just the pages but the linked zip files as well. Does that seem right?
This did not work for me. I tried various options with the -A flag, but it didn't download the zips.
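For the record, here's roughly what I understand that one-liner to be attempting, spelled out by hand as a shell loop: fetch a harvest page, note the zip links, follow the "next page" link, repeat. This is untested guesswork on my part; in particular the shape of the next-page link (a relative harvest?…offset=… href) is an assumption, and the output is just the same sort of zipurls.txt list I end up building by hand further down.

#!/bin/bash
# hand-rolled version of what I think the harvester is meant to do
base='http://www.gutenberg.org/robot/'
url="${base}harvest?filetypes[]=txt&langs[]=en"
while [ -n "$url" ]; do
    page=$(wget -q -O - "$url")
    # the zip links are right there on the page, twice each (href and link text)
    echo "$page" | egrep -o 'http://aleph\.gutenberg\.org/[^"<]+' | uniq >> zipurls.txt
    # assumed shape of the "next page" link: a relative href with an offset=N in it
    next=$(echo "$page" | egrep -o 'harvest\?[^"]*offset=[0-9]+[^"]*' | sed 's/&amp;/\&/g' | tail -1)
    if [ -n "$next" ]; then url="${base}${next}"; else url=''; fi
    sleep 2   # keep the 2-second politeness from the documented command
done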
So, OK, moving on: what I do have is 724 files (with annoying names, because wget can't custom-name them for me), each containing 200-odd links to zip files, like this:
<a rel="nofollow" href="http://aleph.gutenberg.org/1/0/0/3/10036/10036-8.zip">http://aleph.gutenberg.org/1/0/0/3/10036/10036-8.zip</a>
So we can easily grep those out of the files and get a list of the zipfile URLs, right?
egrep -oh 'http://aleph.gutenberg.org/[^"<]+' * | uniq > zipurls.txt
Using uniq there because every URL appears twice, once in the href attribute and once in the link text; uniq is enough because the two copies come out adjacent to each other.
So now we have a huge list of the zip file URLs, and we can get them with wget using the --input-file option:
wget -w 2 --input-file=zipurls.txt
This works, except … some of the files aren't there.
If you go to this URL in a browser:
http://aleph.gutenberg.org/1/0/0/3/10036/
you'll see that 10036-8.zip isn't there. But there's an old folder, and it's in there. What does the -8 mean? I think it means UTF-8 encoding, and I might be double-downloading: getting the same files twice in different encodings. What does the old mean? Just … old?
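If the double-downloading worry is right, one way to hedge against it is to thin zipurls.txt down to one zip per book folder before fetching anything. A rough awk sketch; which suffix to prefer is a guess, since what the suffixes mean is exactly the thing I'm unsure about, and zipurls-deduped.txt is just a name I've made up:

awk '{
    dir = $0; sub(/\/[^\/]*$/, "", dir)       # strip the filename, keep the book folder
    if (!(dir in best) || $0 ~ /-8\.zip$/)    # first URL wins unless a -8 version turns up
        best[dir] = $0
}
END { for (d in best) print best[d] }' zipurls.txt > zipurls-deduped.txt

The ordering comes out shuffled, but wget won't care.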
So now I'm working through the list, not with wget but with a script which is essentially this:
try to get the file
if the response is a 404
add 'old' into the URL and try again
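In concrete terms it's something like this, the big assumption being that a missing zip always reappears under an old/ subfolder of the same book directory:

#!/bin/bash
# for each zip: try the URL as given; if wget fails (e.g. a 404),
# retry with old/ spliced in just before the filename
while read -r url; do
    if ! wget -q "$url"; then
        wget -q "${url%/*}/old/${url##*/}"
    fi
    sleep 2   # keep the 2-second politeness even without wget -w
done < zipurls.txt

wget exits non-zero on a 404, which is what the if is keying off; anything that fails both ways just gets skipped silently, which the real script should probably log.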
How am I doing? What have I missed? Are we having fun yet?