
Data extraction with Scrapy and Python 3

Read Time: 3 Mins
Posted on May 25, 2016
By Valdir Stumm Junior



Scrapy 1.1 release with official Python 3 support

Fasten your seat belts, ladies and gentlemen: Scrapy 1.1 with Python 3 support is officially out! After a couple of months of hard work and four release candidates, this is the first official Scrapy release to support Python 3.

[Embedded tweet announcing the Scrapy 1.1 release]

We know that many of you have been eagerly looking forward to moving your whole stack to Python 3. Well, wait no more: you can get rid of Python 2 once and for all (for the most part)!

Without further ado, let's dive into the nuts and bolts of this latest step forward.

What's new?

Python 3 support isn’t the only good news coming from this release. There are a few features and general improvements that you might want to be aware of:

  • Item Loaders now support nested loaders (see the sketch after this list)
  • `response.text` now holds the response body as unicode, while `response.body` holds the byte string version
  • `FormRequest.from_response` now accepts two new arguments: `formcss` and `formid`
  • Better HTTPS support: HTTPS downloader now does TLS protocol negotiation by default.
  • Scrapy now supports sending non-ASCII emails, although sending email is still only supported when running Scrapy on Python 2
  • Projects created using Scrapy 1.1 automatically respect robots.txt
    • If you want to disable this for whatever reason, set ROBOTSTXT_OBEY to False
  • Scrapy now supports anonymous S3 connections
  • Non-ASCII URLs are now better supported and handled closer to what browsers do. (note that there's an open issue for link extraction)
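To give you a taste of the nested loaders mentioned above, here's a minimal spider sketch. The `ProductItem` fields, CSS selectors, and URL are made up purely for illustration:

```python
import scrapy
from scrapy.loader import ItemLoader


class ProductItem(scrapy.Item):
    # Illustrative fields only
    name = scrapy.Field()
    company = scrapy.Field()
    copyright = scrapy.Field()


class ProductSpider(scrapy.Spider):
    name = "products"
    start_urls = ["http://example.com/product"]  # placeholder URL

    def parse(self, response):
        # New in 1.1: response.text is the body as unicode,
        # while response.body is the raw bytes.
        loader = ItemLoader(item=ProductItem(), response=response)
        loader.add_css("name", "h1.product-name::text")

        # Nested loader scoped to the footer block; the values it
        # collects still end up in the same item as the parent loader.
        footer = loader.nested_css("div.footer")
        footer.add_css("company", "p.company::text")
        footer.add_css("copyright", "p.copyright::text")

        yield loader.load_item()
```

Nested loaders save you from repeating a common selector prefix in every `add_css()` or `add_xpath()` call.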

Check out the release notes for a complete list of changes.

How to install

You can install or upgrade Scrapy in your environment by running:

$ pip install scrapy --upgrade

You can create a Python 3 virtualenv for Scrapy (e.g. using virtualenvwrapper):

$ mkvirtualenv --python=/usr/bin/python3 scrapy11.py3
(scrapy11.py3) $ pip install scrapy
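
To double-check which versions you ended up with, `scrapy version -v` prints the versions of Scrapy, Twisted, and the Python interpreter it is running on:

(scrapy11.py3) $ scrapy version -v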

Limitations using Python 3

Scrapy on Python 3 doesn't work on Windows yet: Scrapy depends on Twisted, and some parts of Twisted haven't been ported to Python 3. Once this Twisted issue is solved, Scrapy users on Windows will be able to run their spiders on Python 3.

In addition to this, a few features are not yet supported on Python 3:

  • FTP download handler
  • Telnet console
  • Sending emails

Backward incompatible changes

Heads up, Scrapy users: Scrapy 1.1 introduces some minor backward-incompatible changes that might break your existing spiders:

  • Media pipelines:
    • When you upload files or images to S3, Scrapy now sets them as private instead of public. You can change this behavior via the FILES_STORE_S3_ACL setting (see the settings sketch after this list).
    • FilesPipeline and ImagesPipeline settings are now instance attributes instead of class attributes. There's work in progress to address this (PR #1989).
  • canonicalize_url() has been changed (for the better). However, it could invalidate some HTTP cache entries you may have from pre-1.1 Scrapy crawls and break some link extractors as well.
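
If your project relies on the old public S3 uploads, a couple of lines in your project's settings.py restore that behavior. This is just a sketch; the bucket name and path are placeholders:

```python
# settings.py (sketch; the bucket and path are placeholders)

# Where the media pipelines store downloaded files
FILES_STORE = "s3://my-bucket/scrapy-files/"

# Scrapy 1.1 uploads S3 files and images as private by default.
# Use a canned ACL to get the old public behavior back:
FILES_STORE_S3_ACL = "public-read"
```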

A big Thank You to…

We couldn’t have done it without the help of all you folks reporting and fixing issues, requesting and submitting features, commenting on pull requests, improving documentation, and more; the list goes on.

Scrapy is a community and Scrapy 1.1 is the result of this community effort. You should be proud of yourselves. Kudos to all you Scrapy lovers!

We'd also like to thank everyone (in alphabetical order) who contributed directly to the Scrapy 1.1 source code:

Agustin Castro, Aivars Kalvāns, Alexander Chekunkov, Alexander Sibiryakov, Ally Weir, Andrew Murray, Andrew Scorpil, Aron Bordin, Artur Gaspar, Berker Peksag, Bryan Crowe, Capi Etheriel, Carlos Peña, cclauss, Chris Nilsson, Christian Pedersen, Daniel Collins, Daniel Graña, David Chen, David Tagatac, Demelziraptor, Dharmesh Pandav, dinesh, djunzu, Elias Dorneles, Gregory Vigo Torres, Hoat Le, hy, Jakob de Maeyer, Jamey Sharp, Julia Medina, Konstantin Lopuhin, Lele, Leonid Amirov, Luar Roji, Lucas Moauro, Marco DallaG, Marius Gedminas, Marven Sanchez, mgachhui, Mikhail Korobov, Mikhail Lyundin, nanolab, nblock, Nicolas Pennequin, Nikola Pavlović, Νικόλαος-Διγενής Καραγιάννης, nyov, Olaf Dietsche, orangain, Pablo Hoffman, palego, Panayiotis Lipiridis, Patrick Connolly, Paul Tremberth, Pawel Miech, Pengyu Chen, preetwinder, Rafał Gutkowski, Ralph Gutkowski, Raul Gallegos, Rick, Robert Weindl, Rolando Espinoza, seales, smirecki, Valdir Stumm Jr, Victor Mireyev, Yaroslav Halchenko and Zoltán Szeredi.

Without your efforts, none of this would have been possible!

Wrap up

Python 3 support has only been in beta for a few months, so chances are there are still some corner cases that have yet to be discovered. If you run into any unexpected behavior, please report your findings in the issue tracker.

You can also contribute to the Scrapy community in several ways, such as improving the documentation, writing tutorials, fixing bugs, and adding new features to Scrapy. Check out the Contribution Guidelines if you want to engage with this amazing community.

Happy Scraping!

 
