ScrapyRT: Turn websites into real-time APIs
If you’ve been using Scrapy for any period of time, you know the capabilities a well-designed Scrapy spider can give you.
With a couple of lines of code, you can build a scalable web crawler and extractor that automatically navigates to your target website and extracts the data you need, be it e-commerce, article, or sentiment data.
Traditional Scrapy spiders have one drawback, however: on large jobs they can take a long time to finish their crawls and deliver their data. That makes them unsuitable for near real-time web scraping, a growing and very useful application for many data aggregation and analysis efforts.
With the growth of data-based services and data-driven decision making, end-users are increasingly looking for ways to extract data on demand from web pages instead of having to wait for data from large periodic crawls.
And that’s where ScrapyRT comes in…
Enter ScrapyRT
Originally growing out of a Zyte Google Summer of Code project in 2014, ScrapyRT (Scrapy Realtime) is an open-source Scrapy extension that enables you to control Scrapy spiders with HTTP requests.
Simply send ScrapyRT's HTTP API a request containing a Scrapy Request object (with the URL and callback as parameters), and the API will return the data extracted by the spider in real time, with no need to wait for an entire crawl to complete.
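For example, if a ScrapyRT instance is already running on its default port (9080) in front of a project that has a spider called toscrape (the spider name and target site here are just placeholders), fetching a single page might look something like this:

```python
# A minimal sketch: ask a running ScrapyRT instance to crawl one URL with an
# existing spider. The spider name ("toscrape") and target URL are placeholders;
# ScrapyRT is assumed to be listening on its default port, 9080.
import requests

response = requests.get(
    "http://localhost:9080/crawl.json",
    params={
        "spider_name": "toscrape",            # name of a spider in your project
        "url": "http://books.toscrape.com/",  # page to schedule for that spider
    },
)

data = response.json()
print(data["items"])  # items extracted by the spider's parse() callback
```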
Usually, spiders are run for long periods of time, and proceed step by step, traversing the web from a starting point and extracting any data that matches their extraction criteria.
This mode of operation is great if you don’t know the location of your desired data. However, if you know the location of the data then there is a huge amount of redundancy if the spider has to complete all the intermediary steps.
ScrapyRT allows you to schedule a single request with a spider, parse it in a callback, and get the response returned immediately as JSON instead of having the data saved in a database.
By default, the spider's start_requests method is not executed; the only request scheduled with the spider is the Request generated from the API parameters.
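To pass a full Request object, including a specific callback, you can POST a JSON body instead of using query parameters. The request, callback, and start_requests parameter names below follow the ScrapyRT documentation; the spider name and callback are, again, placeholders:

```python
# Sketch: schedule a single Scrapy Request with an explicit callback via POST.
# Parameter names follow the ScrapyRT docs; "toscrape" and "parse_book" are
# placeholders for a spider and callback defined in your own project.
import requests

payload = {
    "spider_name": "toscrape",
    "request": {
        "url": "http://books.toscrape.com/catalogue/a-light-in-the-attic_1000/index.html",
        "callback": "parse_book",  # parse this one page with a specific callback
    },
    # "start_requests": True,      # uncomment to also run the spider's start_requests
}

response = requests.post("http://localhost:9080/crawl.json", json=payload)
print(response.json()["items"])
```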
How ScrapyRT works
ScrapyRT's architecture is very simple. It is a web server built on Python's Twisted framework, tied to a custom Crawler object from Scrapy.
Twisted is one of the most powerful Python asynchronous frameworks, and it was a natural choice for ScrapyRT: Twisted works great for asynchronous crawling, and Scrapy already uses it for all of its HTTP traffic, which ensures easy integration.
Once added to your project, ScrapyRT runs as a web service, retrieving data when you make a request containing the URL you want to extract data from and the name of the spider you would like to use.
ScrapyRT will then schedule a request in Scrapy for the URL specified and use the named spider's parse method as a callback. The data extracted from the page will be serialized into JSON and returned in the response body. If the spider specified doesn't exist, a 404 will be returned.
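Assuming ScrapyRT has been installed (pip install scrapyrt) and started from your project directory with the scrapyrt command, a successful call to the crawl.json endpoint returns a body along these lines. The field names follow the ScrapyRT documentation; the item data is invented for illustration:

```python
# Rough shape of a successful crawl.json response (illustrative values only).
sample_response = {
    "status": "ok",
    "spider_name": "toscrape",
    "stats": {
        "downloader/request_count": 1,
        "item_scraped_count": 20,
        # ... the usual Scrapy crawl stats ...
    },
    "items": [
        {"title": "A Light in the Attic", "price": "£51.77"},
        # ... one dict per item yielded by the callback ...
    ],
    "items_dropped": [],
}
```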
The ScrapyRT web server is customizable and modular: you can easily override the GET and POST handlers to add your own functionality. For example, you can write your own Twisted Resources that inherit from the main ScrapyRT handlers, return responses in XML or HTML instead of JSON, and register them in the configuration, as in the sketch below.
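The RESOURCES setting and the CrawlResource base class come from the ScrapyRT documentation, but the render_object hook used here is an assumption about ScrapyRT's internals, so treat this as a starting point rather than a drop-in implementation:

```python
# myproject/resources.py -- sketch of a custom ScrapyRT endpoint that adds a
# field to every response. CrawlResource is documented by ScrapyRT; the
# render_object override is an assumption about its internals.
import time

from scrapyrt.resources import CrawlResource


class TimestampedCrawlResource(CrawlResource):
    """Illustrative resource that stamps each response with the serve time."""

    def render_object(self, obj, request):
        if isinstance(obj, dict):
            obj["served_at"] = int(time.time())  # add our own metadata
        return super().render_object(obj, request)


# In a ScrapyRT settings module, register the custom resource in place of the
# default crawl.json handler, then point the scrapyrt command at that module
# (see the ScrapyRT docs for the exact command-line option).
RESOURCES = {
    "crawl.json": "myproject.resources.TimestampedCrawlResource",
}
```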
One thing to keep in mind is that ScrapyRT was not developed with long crawls in mind.
Remember: after sending a request to ScrapyRT you have to wait for the spider to finish before you get a response.
So if a request requires the spider to crawl an enormous site and generate a million requests in its callbacks, ScrapyRT isn't the best option for you: you will likely be sitting in front of a blank screen waiting for the crawl to finish and return the items.
One possible way of solving this problem would be to modify ScrapyRT to use WebSockets or HTTP push notifications, so that the API could send items to the client as they arrive.
Currently, data from a spider is returned in the response to the initial request, so after sending each request you have to wait until the spider returns its data. If you expect your spider to generate lots of requests in its callbacks but you don't actually need all of them, you can limit the number of requests by passing the max_requests parameter to ScrapyRT, as shown below.
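For example, the same GET request as before can be capped at roughly twenty scheduled requests (the spider name and URL are placeholders):

```python
# Sketch: cap how many requests the spider may generate for this API call
# using the max_requests parameter documented by ScrapyRT.
import requests

response = requests.get(
    "http://localhost:9080/crawl.json",
    params={
        "spider_name": "toscrape",
        "url": "http://books.toscrape.com/",
        "max_requests": 20,  # stop scheduling new requests after this many
    },
)
print(len(response.json()["items"]))
```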
If you would like to learn more about ScrapyRT or contribute to the open-source project, then check out the ScrapyRT documentation and GitHub repository.
Your data extraction needs
At Zyte we specialize in turning unstructured web data into structured data. If you need to start or scale your web scraping projects, our Solution architecture team is available for a free consultation, where we will evaluate and develop the architecture for a data extraction solution that meets your data and compliance requirements.
At Zyte we always love to hear what our readers think of our content and would be more than interested in any questions you may have. So please, leave a comment below with your thoughts and perhaps consider sharing what you are working on right now!