
Meet Parsel: The selector library behind Scrapy

Read Time: 3 Mins
Posted on: July 28, 2016
By Valdir Stumm Junior

We eat our own spider food since Scrapy is our go-to workhorse on a daily basis. However, there are certain situations where Scrapy can be overkill and that’s when we use Parsel. Parsel is a Python library for extracting data from XML/HTML text using CSS or XPath selectors. It powers the scraping API of the Scrapy framework.

[Image: not to be confused with Parseltongue/Parselmouth]

We extracted Parsel from Scrapy during Europython 2015 as a part of porting Scrapy to Python 3. As a library, it’s lighter than Scrapy (it relies on lxml and cssselect) and also more flexible, allowing you to use it within any Python program.

Using Parsel

Install Parsel using pip:

pip install parsel

And here’s how you use it. Say you have this HTML snippet in a variable:

>>> html = u'''
...    <ul>
...        <li><a href="http://blog.scrapinghub.com">Blog</a></li>
...        <li><a href="https://www.scrapinghub.com">Scrapinghub</a></li>
...        <li class="external"><a href="http://www.scrapy.org">Scrapy</a></li>
...    </ul>'''

You then import Parsel, load the HTML into a Selector object and extract the links with an XPath expression:

>>> import parsel
>>> sel = parsel.Selector(html)
>>> sel.xpath("//a/@href").extract()
[u'http://blog.scrapinghub.com', u'https://www.scrapinghub.com', u'http://www.scrapy.org']
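
If you prefer CSS, the same links can be grabbed with Parsel's ::attr() pseudo-element. A quick sketch using the same snippet:

>>> sel.css('a::attr(href)').extract()
[u'http://blog.scrapinghub.com', u'https://www.scrapinghub.com', u'http://www.scrapy.org']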

Note: Parsel works in both Python 2 and Python 3. If you’re using Python 2, remember to pass the HTML as a unicode object.
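
For example, if what you have are raw bytes, decode them before building the Selector. A minimal sketch (the byte string below is made up):

>>> raw = b'<ul><li><a href="http://www.scrapy.org">Scrapy</a></li></ul>'  # bytes, e.g. read from a file or socket
>>> sel2 = parsel.Selector(raw.decode('utf-8'))  # decode to unicode before parsing
>>> sel2.xpath('//a/text()').extract_first()
u'Scrapy'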

Sweet Parsel Features

One of the nicest features of Parsel is the ability to chain selectors, mixing CSS and XPath however you wish, as in this example:

>>> sel.css('li.external').xpath('./a/@href').extract()
[u'http://www.scrapy.org']
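
The chain works in the other direction too. Here is a quick sketch of the same extraction, starting with XPath and finishing with CSS:

>>> sel.xpath('//li[@class="external"]').css('a::attr(href)').extract()
[u'http://www.scrapy.org']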

You can also iterate through the results of the .css() and .xpath() methods since each element will be another selector:

>>> for li in sel.css('ul li'):
...     print(li.xpath('./a/@href').extract_first())
...
http://blog.scrapinghub.com
https://www.scrapinghub.com
http://www.scrapy.org

You can find more examples of this in the documentation.

When to use Parsel

The beauty of Parsel is in its wide applicability. It is useful for a range of situations including:

  • Processing XML/HTML data in an IPython notebook
  • Writing end-to-end tests for your website or app
  • Simple web scraping projects with the Python Requests library (see the sketch after this list)
  • Simple automation tasks at the command-line
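
As an illustration of the Requests use case, here is a minimal sketch (the URL is just a placeholder) that fetches a page and prints every link on it:

import requests
import parsel

response = requests.get('http://blog.scrapinghub.com')  # placeholder URL
sel = parsel.Selector(response.text)                    # response.text is already unicode
print(sel.xpath('//a/@href').extract())                 # every link target on the page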

And now you can also run Parsel from the command line for simple extraction tasks in your terminal. This new development is thanks to our very own Rolando, who created parsel-cli.

Install parsel-cli with pip install parsel-cli and play around using the examples below (you need to have curl installed).

The following command will download and extract the list of Academy Award-winning films from Wikipedia:

curl -s https://en.wikipedia.org/wiki/List_of_Academy_Award-winning_films | parsel-cli 'table.wikitable tr td i a::text'

You can also get the current top 5 news items from Hacker News using:

curl -s https://news.ycombinator.com | parsel-cli 'a.storylink::attr(href)' | head -n 5

And how about obtaining a list of the latest YouTube videos from a specific channel?

curl -s https://www.youtube.com/user/crashcourse/videos |
 parsel-cli 'h3 a::attr(href), h3 a::text' |
 paste -s -d' \n' - | sed 's|^|http://youtube.com|'

Wrap Up

I hope you enjoyed this little tour of Parsel, and that these examples spark your imagination the next time you need to find a solution for your HTML parsing needs.

The next time you find yourself wanting to extract data from HTML/XML and don’t need Scrapy and its crawling capabilities, you know what to do: just Parsel it!

Feel free to reach out to us on Twitter and let us know how you use Parsel in your projects.
