How Scrapy makes web crawling easy

Read Time: 5 Mins
Posted on: July 27, 2021
By Attila Toth

If you are interested in web scraping as a hobby, or you already have a few scripts extracting data but are not familiar with Scrapy, then this article is meant for you.

I’ll go quickly over the fundamentals of Scrapy and why I think it’s the right choice when it comes to scraping at scale.

I hope you’ll see the value you can get quickly with this awesome framework and that you’ll be interested in learning more and consider it for your next big project.

What is Scrapy?

Scrapy is a web scraping framework written in Python. You can leverage Python’s rich data science ecosystem along with Scrapy, which makes development a lot easier.

That short description does Scrapy justice, but this article aims to show you how much value you can get out of the framework and to introduce a couple of its fundamental concepts. This is not an introduction to web scraping or to Python, so I’ll assume you have basic knowledge of the language and at least an understanding of how HTTP requests work.

How does Scrapy compare with other popular options?

If you did a Python web scraping tutorial before, chances are you’ve run into the BeautifulSoup and requests libraries. These offer a fast way to extract data from web pages but don’t provide you with the project structure and sane defaults that Scrapy uses for most tasks. You’d have to handle redirects, retries, cookies, and more on your own while Scrapy handles these out of the box.

You may think you can get away with a headless browser such as Selenium or Puppeteer; after all, that would be much harder to block. The truth is that headless browsers consume far more resources than plain HTTP requests, and that takes a toll once you have hundreds or thousands of scrapers running.

How do you set up Scrapy?

Scrapy is a Python package like any other. You can install it with pip in your virtualenv like so:

$ pip install scrapy

The two concepts you need to understand are the Scrapy project and the spider. A project wraps multiple spiders and you can think of a spider as a scraping configuration for a particular website. After installing, you can start a project like so:

$ scrapy startproject myprojectname

A project will encapsulate all your spiders, utilities, and even the deployment configs.
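
For reference, here’s roughly the directory layout that startproject generates (the exact files can vary slightly between Scrapy versions):

myprojectname/
    scrapy.cfg            # deployment configuration
    myprojectname/
        __init__.py
        items.py          # item definitions
        middlewares.py    # spider and downloader middlewares
        pipelines.py      # item pipelines
        settings.py       # project-wide settings
        spiders/          # your spiders live here
            __init__.py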

How do you scrape a simple webpage?

A spider handles everything needed for a particular website. It will yield requests to web pages and receive back responses. Its duty is to then process these responses and yield either more requests or data.

In actual Python code, a spider is no more than a Python class that inherits from scrapy.Spider. Here’s a basic example:

import scrapy

class MySpider(scrapy.Spider):
    name = 'zyte_blog'

    start_urls = ['https://zyte.com/blog']

    def parse(self, response):
        # Follow every blog post link found on the listing page
        for href in response.css('div.post-header h2 a::attr(href)').getall():
            yield scrapy.Request(href, callback=self.parse_blog_post)

        # Follow the pagination link; its response comes back to parse() again
        yield scrapy.Request(
            url=response.css('a.next-posts-link::attr(href)').get(),
        )

    def parse_blog_post(self, response):
        yield {
            'url': response.url,
            'title': response.css('span#hs_cos_wrapper_name::text').get(),
        }

The start_urls attribute is a list of URLs to start scraping from. Each will yield a request whose response will be received in a callback. The default callback is parse. As you can see, callbacks are just class methods that process responses and yield more requests or data points.
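
If you need more control over those first requests, you can override the start_requests method instead of relying on start_urls. Here’s a minimal sketch of the same spider using it:

import scrapy

class MySpider(scrapy.Spider):
    name = 'zyte_blog'

    def start_requests(self):
        # Equivalent to start_urls, but lets you customize each initial request
        yield scrapy.Request('https://zyte.com/blog', callback=self.parse)

    def parse(self, response):
        ...  # same parsing logic as above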

How do you extract data points from HTML with Scrapy?

You can use Scrapy’s selectors! There are CSS selectors available directly on the response object for this:

link = response.css('a.next-posts-link::attr(href)').get()  # extract using class
title = response.css('span#hs_cos_wrapper_name::text').get()  # extract using id

There are also XPath selectors, which offer more powerful options that you’ll most likely need. Here are the same selectors using XPath:

link = response.xpath('//a[contains(@class, "next-posts-link")]/@href').get()  # extract using class
title = response.xpath('//span[@id="hs_cos_wrapper_name"]/text()').get()  # extract using id

Next, you’ll need a way to output your data in a parsable format. There are powerful utilities, such as items and item loaders, but in its simplest form you can store your data in Python dictionaries:

yield {
    'url': response.url,
    'title': response.css('span#hs_cos_wrapper_name::text').get(),
}

How do you run a Scrapy spider?

In your project directory, using the above example project, you can run:

$ scrapy crawl zyte_blog

This will print the scraped data to standard output along with a lot of logging, but you can easily write just the actual data to a CSV or JSON file by adding an output option:

$ scrapy crawl zyte_blog -o blog_posts.csv

Contents of CSV file:

url,title
https://zyte.com/blog/how-to-get-high-success-rates-with-proxies-3-steps-to-scale-up,How to Get High Success Rates With Proxies: 3 Steps to Scale Up
https://zyte.com/blog/data-center-proxies-vs.-residential-proxies,Data Center Proxies vs. Residential Proxies
https://zyte.com/blog/price-intelligence-questions-answered,Your Price Intelligence Questions Answered
…
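
If you prefer JSON, the same feed export works; the output format is inferred from the file extension:

$ scrapy crawl zyte_blog -o blog_posts.json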

How to deal with getting blocked?

Scrapy makes it easy to manage complex session logic. As you add more spiders and your project gets more complex, Scrapy allows you to prevent bans in various ways.

The most basic way to tweak your requests is to set headers. For example, you can add an Accept header like so:

scrapy.Request(url, headers={'accept': '*/*', 'user-agent': 'some user-agent value'})

You may already be thinking that there must be a better way than setting this on each individual request, and you’re right! Scrapy lets you set default headers and options for each spider like this:

custom_settings = {
    'DEFAULT_REQUEST_HEADERS': {'accept': '*/*'},
    'USER_AGENT': 'some user-agent value',
}

This can either be set on individual spiders or in your settings.py file, which Scrapy generates for you.

But wait… There’s more!

You can also use middlewares to do this. These can be used across spiders to add headers and more.

Middlewares are another powerful feature of Scrapy because they allow you to do things like handling redirects, retries, cookies, and more. And that’s just what Scrapy has out of the box! 

Using middlewares you can respect robots.txt configurations for particular websites to ensure that you don’t crawl something you shouldn’t. 
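
For example, obeying robots.txt comes down to a single setting, which newly generated projects enable by default:

# settings.py
ROBOTSTXT_OBEY = True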

How to be “kind” while scraping?

Web scraping can take a toll on the target website, which is probably not what you intend. To scrape politely, you’ll need to add sane delays between your requests. You can do that easily using the built-in automatic throttling (AutoThrottle) extension.

You can also randomize the delay, so instead of looking like a bot that requests a page precisely every 2 seconds, you yield requests anywhere from 1 to 5 seconds apart!
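
As a rough sketch, the relevant settings look like this (the values are just illustrative):

# settings.py
DOWNLOAD_DELAY = 2               # base delay between requests, in seconds
RANDOMIZE_DOWNLOAD_DELAY = True  # vary the delay so requests aren't perfectly periodic
AUTOTHROTTLE_ENABLED = True      # let Scrapy adapt the delay to server response times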

How does Scrapy handle proxies?

There are many ways to work with proxies in Scrapy. You can set them for individual requests like so:

scrapy.Request(
    url,
    meta={'proxy': 'host:port'},
)

Or you can rely on the built-in HTTP proxy middleware, which can also pick the proxy up from environment variables instead of you setting it on each individual request.
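
Alternatively, a tiny custom downloader middleware can attach the same proxy to every outgoing request. Here’s a rough sketch (the proxy address and the settings path are placeholders):

class ProxyMiddleware:
    def process_request(self, request, spider):
        # Attach the same proxy to every outgoing request
        request.meta['proxy'] = 'http://host:port'

# settings.py
# DOWNLOADER_MIDDLEWARES = {'myprojectname.middlewares.ProxyMiddleware': 543}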

If you’re using Smart Proxy Manager (or want to) you can use the official middleware to set it up.
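For reference, the setup with the official scrapy-zyte-smartproxy middleware boils down to a few settings along these lines (check the package’s documentation for the exact names and recommended priority):

# settings.py
DOWNLOADER_MIDDLEWARES = {
    'scrapy_zyte_smartproxy.ZyteSmartProxyMiddleware': 610,
}
ZYTE_SMARTPROXY_ENABLED = True
ZYTE_SMARTPROXY_APIKEY = '<your API key>'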

How does Scrapy help you process data?

Scrapy also offers you items to help define a structure for your data. Here’s how a simple definition looks:

import scrapy

class BlogItem(scrapy.Item):
    title = scrapy.Field()
    url = scrapy.Field()

You can use data classes as well!

from dataclasses import dataclass

@dataclass
class BlogItem:
    title: str
    url: str

Item Loaders are the next step for data formatting. To understand where they become useful, think of multiple spiders using the same item and requiring the same formatting, for example stripping whitespace from a 'description' field and merging its list of strings into one. They can do some pretty complex stuff!
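
Here’s a minimal sketch of how that might look (depending on your Scrapy version, the processors live in itemloaders.processors or scrapy.loader.processors):

from itemloaders.processors import MapCompose, TakeFirst
from scrapy.loader import ItemLoader

class BlogItemLoader(ItemLoader):
    default_item_class = BlogItem            # the BlogItem defined above
    default_output_processor = TakeFirst()   # keep a single value per field
    title_in = MapCompose(str.strip)         # strip whitespace from every extracted string

# Inside a callback:
#     loader = BlogItemLoader(response=response)
#     loader.add_css('title', 'span#hs_cos_wrapper_name::text')
#     loader.add_value('url', response.url)
#     yield loader.load_item()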

Pipelines for processing items are also an option. They can be used for filtering out duplicate items based on certain fields, or for adding and validating computed values (such as dropping items based on a timestamp).
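
As a rough sketch, a de-duplication pipeline based on the url field could look like this (it still needs to be enabled in the ITEM_PIPELINES setting):

from itemadapter import ItemAdapter
from scrapy.exceptions import DropItem

class DuplicateUrlPipeline:
    def __init__(self):
        self.seen_urls = set()

    def process_item(self, item, spider):
        # ItemAdapter gives uniform access to dicts, Items, and dataclasses
        url = ItemAdapter(item)['url']
        if url in self.seen_urls:
            raise DropItem(f'Duplicate item found: {url}')
        self.seen_urls.add(url)
        return item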

Read more

This was just an overview: there are many other features included directly in Scrapy, as well as many extensions, middlewares, and pipelines created by the community.

Scrapy is a mature open source project with many active contributors and has been around for years.

It’s well supported so you’ll find documentation and tutorials for almost everything you can think of and there are lots of plugins developed by the community.
