- 8.5 hours on-demand video
- 20 articles
- 33 downloadable resources
- Full lifetime access
- Access on mobile and TV
- Certificate of Completion
- Creating a web crawler in Scrapy
- Crawling single or multiple websites and scraping data
- Deploying & Scheduling Spiders to ScrapingHub
- Logging into Websites with Scrapy
- Running Scrapy as a Standalone Script
- Building an advanced Scrapy spider
- More functions that Scrapy offers after the spider is done scraping
- Editing and Using Scrapy Parameters
- Exporting data extracted by Scrapy into CSV, Excel, XML, or JSON files
- Storing data extracted by Scrapy into MySQL and MongoDB databases
- Several real-life web scraping projects, including Craigslist, LinkedIn and many others
- Downloadable Python source code for all exercises in this Scrapy tutorial
- Q&A board to send your questions and get them answered quickly
What Scrapy is; Scrapy vs. other Python-based scraping tools such as BeautifulSoup and Selenium; when you should use Scrapy and when other tools make more sense; and the pros and cons of Scrapy.
Scrapy, overall, is a web crawling framework written in Python. One of its main advantages is that it's built on top of Twisted, an asynchronous networking framework, which means that Scrapy is asynchronous and, as a result, really efficient. To illustrate why this is a great feature for those of you who don't know what an asynchronous scraping framework means, let's use an enlightening example. Imagine you have to call a hundred different people on the phone. Normally, you'd sit down, dial the first number, and patiently wait for a response on the other end, one call at a time. In an asynchronous world, you can dial 20 or 50 numbers at the same time, and then process each call only once the person on the other end picks up. Hopefully, the advantage now makes sense.
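To make this concrete, here is a minimal sketch of how that concurrency looks in Scrapy code; the spider name, the URLs, and the CONCURRENT_REQUESTS value are placeholder assumptions, not part of the course material:

```python
import scrapy

class PhoneBookSpider(scrapy.Spider):
    """Illustrative spider: Scrapy 'dials' many URLs at once."""
    name = "phonebook"
    # Placeholder list of 100 pages to fetch.
    start_urls = ["http://example.com/page/{}".format(i) for i in range(1, 101)]

    # Scrapy already keeps several requests in flight by default (16);
    # raising the limit is like dialing 50 numbers at the same time.
    custom_settings = {"CONCURRENT_REQUESTS": 50}

    def parse(self, response):
        # Runs whenever any pending request completes: whichever
        # "call" is picked up first gets processed first.
        yield {"url": response.url,
               "title": response.css("title::text").extract_first()}
```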
Scrapy is supported under Python 2.7 and Python 3.3 or above, so depending on your version of Python, you are pretty much good to go. It is important to note that Python 2.6 support was dropped starting with Scrapy 0.20, and Python 3 support was added in Scrapy 1.1.
Scrapy is, in some ways, similar to Django. So if you use or have previously used Django, you will definitely benefit from the familiarity.
Now let's talk more about other Python-based web scraping tools. These are older, specialized libraries with very focused functionality; they are not complete web scraping solutions like Scrapy is. The first two, urllib2 and Requests, are HTTP modules for opening and reading web pages. The other two, Beautiful Soup and lxml, handle the fun part of scraping jobs: extracting data points from the pages loaded with urllib2 or Requests.
First, urllib2's biggest advantage is that it is included in the Python standard library, so as long as you have Python installed, you are good to go. In the past, urllib2 was more popular, but it has since been largely replaced by another tool called Requests. The documentation of Requests is superb; I think it may even be the most popular module for Python, period. If you haven't already, give the docs a read. Unfortunately, Requests doesn't come pre-installed with Python, so you'll have to install it. I personally use it for quick and dirty scraping jobs. Both urllib2 and Requests support Python 2 and Python 3.
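As a quick illustration of that "quick and dirty" style, here is a minimal sketch using Requests; the URL is a placeholder:

```python
import requests

# Fetch a page and inspect the raw HTML, no framework required.
response = requests.get("http://example.com")
response.raise_for_status()   # raise an error for 4xx/5xx responses
print(response.status_code)   # e.g. 200
print(response.text[:200])    # first 200 characters of the HTML
```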
The next tool is called Beautiful Soup, and once again, it's used for extracting data points from loaded pages. Beautiful Soup is quite robust and handles malformed markup nicely. In other words, if you have a page that does not validate as proper HTML, but you know for a fact that it is an HTML page, you should try scraping data from it with Beautiful Soup. In fact, the name comes from the expression 'tag soup', which describes badly invalid markup. Beautiful Soup creates a parse tree that can be used to extract data from HTML. The official docs are comprehensive, easy to read, and full of examples. So Beautiful Soup, just like Requests, is really beginner-friendly, and just like the other scraping tools, it supports Python 2 and Python 3.
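For example, here is a minimal sketch of Beautiful Soup coping with deliberately broken "tag soup"; the markup is made up for illustration:

```python
from bs4 import BeautifulSoup

# Malformed markup: unclosed <p> tags and a dangling <b>.
soup = BeautifulSoup("<html><p>First<p>Second <b>bold", "html.parser")

# Beautiful Soup still builds a usable parse tree.
for p in soup.find_all("p"):
    print(p.get_text())  # prints "First", then "Second bold"
```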
lxml is similar to Beautiful Soup in that it's used for extracting data. It's the most feature-rich Python library for processing both XML and HTML, and it's also really fast and memory-efficient. A fun fact is that Scrapy selectors are built on top of lxml, and Beautiful Soup, for example, also supports it as a parser. Just like Requests, I personally use lxml, paired with Requests, for quick and dirty jobs. Bear in mind that the official documentation is not that beginner-friendly, to be honest, so if you haven't used a similar tool before, examples from blogs or other sites will probably make more sense than the official docs.
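And here is the kind of quick and dirty lxml-plus-Requests job mentioned above; the URL and XPath expressions are placeholders:

```python
import requests
from lxml import html

# Load the page with Requests, parse it with lxml.
page = requests.get("http://example.com")
tree = html.fromstring(page.content)

# Extract link texts and their targets with XPath.
titles = tree.xpath("//a/text()")
hrefs = tree.xpath("//a/@href")
print(list(zip(titles, hrefs)))
```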
So, back to Scrapy's main pros: first and foremost, it's asynchronous. Furthermore, if you are building something robust that you want to be as efficient as possible, with lots of flexibility and lots of options, you should definitely use Scrapy.
One example where using one of the other tools mentioned above makes sense: if you have a project where you just need to load the home page of, let's say, a restaurant website and check whether they have your favorite dish on the menu, then you should not use Scrapy because, to be honest, it would be overkill. Some of Scrapy's drawbacks are that, since it is a full-fledged framework, it is not that beginner-friendly, and the learning curve is a little steeper than with the other tools. Installing Scrapy can also be a tricky process, especially on Windows. But bear in mind that there are a lot of resources online for this; I'm not even kidding, there are probably a thousand blog posts about installing Scrapy on your specific operating system.
In this Scrapy tutorial, we are going to cover deploying spider code to ScrapingHub. What is it? Scrapinghub (scrapinghub.com) is a cloud-based web crawling platform where we can send our spider code and run it from there.
Scrapinghub is an advanced platform for deploying and running web crawlers (also known as spiders or scrapers). It allows you to build crawlers easily, deploy them instantly and scale them on demand, without having to manage servers, backups or cron jobs. Everything is stored in a highly available database and retrievable using an API.
Scrapinghub provides users with a variety of web crawling and data processing services. Its APIs allow users to schedule scraping jobs, retrieve scraped items, retrieve the log for a job, and retrieve information about spiders.
On Scrapinghub, register for free or sign in with Google or GitHub. On the overview page, we can create our projects: name your project, select Scrapy since we built our tool with it, and click Create. Finally, we can deploy our spider; you get instructions on how to do this. The tool we need is called the Scrapinghub command line client, and it can be installed by simply typing pip install shub in the terminal. So that is going to be a no-brainer really, and extremely easy.
Make sure you are in the Scrapy spider folder, and then type shub deploy followed by the project ID. In a few seconds, we will get the status, and once it is OK, the "Codes and Deploys" page on Scrapinghub will change. On the Scrapinghub dashboard, there is a Run button to run our Scrapy spider. Once the scraping job finishes, we can export the data to CSV, JSON, or XML and download the file.
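In terminal terms, the whole deployment boils down to something like this; the project ID below is a placeholder you copy from your own Scrapinghub project page:

```
pip install shub     # the Scrapinghub command line client
shub login           # paste your Scrapinghub API key when prompted
shub deploy 12345    # 12345 stands in for your own project ID
```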
One of the important features of Scrapinghub is that you can run "Periodic Jobs". You select a Scrapy spider, a priority, and the day and hour to run. So, for example, if you want to run the spider code each day at around 12 o'clock, you would just select 12 o'clock and click Save. On the dashboard, you will see it under "Next Jobs", and then at around 12 o'clock it will run, and after 30 or so seconds, for example, it will move to "Completed Jobs".
Scrapinghub also offers other scraping tools: a partially free service for visual web scraping, and a tool that integrates your existing spider code with a pool of different IPs, a perfect solution when you are scraping a website that throws captchas, because once an IP gets banned or starts hitting captchas, it moves on to the next IP.
- Python Level: Intermediate. This Scrapy tutorial assumes that you already know the basics of writing simple Python programs and that you are generally familiar with Python's core features (data structures, file handling, functions, classes, modules, common library modules, etc.).
- Python 2.7+ or Python 3.3+
- If you do not know what Scrapy is or why you should use it, please read the course description and watch the preview lectures BEFORE joining the course.
Scrapy is a free and open source web crawling framework, written in Python. Scrapy is useful for web scraping and extracting structured data, which can be used for a wide range of applications, like data mining, information processing, or historical archival. This Python Scrapy tutorial covers the fundamentals of Scrapy.
Web scraping is a technique for gathering data or information from web pages. You could revisit your favorite website every time it updates with new information, or you could write a web scraper to do it for you!
Web crawling is usually the very first step of data research. Whether you are looking to obtain data from a website, track changes on the internet, or use a website API, web crawlers are a great way to get the data you need.
A web crawler, also known as a web spider, is an application that can scan the World Wide Web and extract information automatically. While they have many components, web crawlers fundamentally follow a simple process: download the raw data, process it and extract the information, and, if desired, store the data in a file or database. There are many ways to do this, and many languages you can build your web crawler or spider in.
Before Scrapy, developers relied on various Python packages for this job, such as urllib2 and BeautifulSoup, which are still widely used. Scrapy is a newer Python package that aims at easy, fast, automated web crawling, and it has recently gained much popularity.
Scrapy is now widely requested by many employers, for both freelancing and in-house jobs, and that was one important reason for creating this Python Scrapy tutorial: to help you enhance your skills and earn more income.
In this Scrapy tutorial, you will learn how to install Scrapy. You will also build basic and advanced spiders, and learn more about the Scrapy architecture. Then you will learn about deploying spiders and logging into websites with Scrapy. We will build a generic web crawler with Scrapy, and we will integrate Selenium with Scrapy to iterate over pages. We will build an advanced spider with the option to iterate over pages, close it out using the Scrapy close function, and then discuss Scrapy arguments. Finally, in this course, you will learn how to save the output to MySQL and MongoDB databases. There is also a dedicated section of solved web scraping exercises, which is still being updated.
One of the main advantages of Scrapy is that it is built on top of Twisted, an asynchronous networking framework. "Asynchronous" means that you do not have to wait for one request to finish before making another, and you can achieve this with a high level of performance. Being implemented with non-blocking (aka asynchronous) code for concurrency, Scrapy is really efficient.
It is worth noting that Scrapy tries not only to solve the content extraction (called scraping), but also the navigation to the relevant pages for the extraction (called crawling). To achieve that, a core concept in the framework is the Spider -- in practice, a Python object with a few special features, for which you write the code and the framework is responsible for triggering it.
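A minimal sketch of such a Spider, using the quotes.toscrape.com sandbox site as an example target, might look like this; you write the parse callback, and the framework schedules the downloads and triggers it:

```python
import scrapy

class QuotesSpider(scrapy.Spider):
    name = "quotes"
    start_urls = ["http://quotes.toscrape.com/"]

    def parse(self, response):
        # Scraping: extract structured data from the current page.
        for quote in response.css("div.quote"):
            yield {
                "text": quote.css("span.text::text").extract_first(),
                "author": quote.css("small.author::text").extract_first(),
            }
        # Crawling: follow the "Next" link so the framework keeps going.
        next_page = response.css("li.next a::attr(href)").extract_first()
        if next_page:
            yield response.follow(next_page, callback=self.parse)
```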
Scrapy provides many of the functions required for downloading websites and other content on the internet, making the development process quicker and less programming-intensive. This Python Scrapy tutorial will teach you how to use Scrapy to build web crawlers and web spiders.
Scrapy is the most popular tool for web scraping and crawling written in Python. It is simple and powerful, with lots of features and possible extensions.
Python Scrapy Tutorial Topics:
This Scrapy course starts by covering the fundamentals of using Scrapy, and then concentrates on Scrapy's advanced features for creating and automating web crawlers. The main topics of this Python Scrapy tutorial are as follows:
What Scrapy is, the differences between Scrapy and other Python-based web scraping libraries such as BeautifulSoup, LXML, Requests, and Selenium, and when it is better to use Scrapy.
This tutorial starts by showing how to create a Scrapy project and then build a basic spider to scrape data from a website.
Exploring XPath commands and how to use them with Scrapy to extract data.
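As a taste of what that looks like, here is a sketch of XPath inside a spider callback; the site URL and element names are assumptions for illustration:

```python
import scrapy

class HeadlineSpider(scrapy.Spider):
    name = "headlines"
    start_urls = ["http://example.com/news"]  # placeholder URL

    def parse(self, response):
        # Text nodes: the text of every <h2> on the page.
        for headline in response.xpath("//h2/text()").extract():
            yield {"headline": headline}
        # Attribute values: the href of each link in <div class="post">.
        for href in response.xpath('//div[@class="post"]//a/@href').extract():
            yield {"link": response.urljoin(href)}
```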
Building a more advanced Scrapy spider to iterate multiple pages of a website and scrape data from each page.
Scrapy Architecture: the overall layout of a Scrapy project; what each file represents and how you can use it in your spider code.
Web Scraping best practices to avoid getting banned by the websites you are scraping.
In this Scrapy tutorial, you will also learn how to deploy a Scrapy web crawler to the Scrapy Cloud platform easily. Scrapy Cloud is a platform from Scrapinghub to run, automate, and manage your web crawlers in the cloud, without the need to set up your own servers.
This Scrapy tutorial also covers how to use Scrapy for web scraping authenticated (logged in) user sessions, i.e. on websites that require a username and password before displaying data.
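The usual approach relies on Scrapy's FormRequest.from_response, which fills in and submits the login form; the site, form field names, and credentials below are placeholders:

```python
import scrapy

class LoginSpider(scrapy.Spider):
    name = "login_demo"
    start_urls = ["http://example.com/login"]  # placeholder login page

    def parse(self, response):
        # Submit the login form found on the page.
        return scrapy.FormRequest.from_response(
            response,
            formdata={"username": "john", "password": "secret"},
            callback=self.after_login,
        )

    def after_login(self, response):
        # Scrapy keeps the session cookies for all subsequent requests.
        if "authentication failed" in response.text:
            self.logger.error("Login failed")
            return
        yield {"title": response.css("title::text").extract_first()}
```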
This course concentrates mainly on how to create an advanced web crawler with Scrapy. We will cover the Scrapy CrawlSpider, which is the most commonly used spider for crawling regular websites, as it provides a convenient mechanism for following links by defining a set of rules. We will also use the LinkExtractor object, which defines how links are extracted from each crawled page; it allows us to grab all the links on a page, no matter how many of them there are.
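A minimal sketch of that combination, with a placeholder domain and made-up URL patterns, could look like this:

```python
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor

class GenericCrawler(CrawlSpider):
    name = "generic"
    allowed_domains = ["example.com"]      # placeholder domain
    start_urls = ["http://example.com/"]

    rules = (
        # Follow pagination links; no callback means "just keep crawling".
        Rule(LinkExtractor(allow=r"/page/\d+")),
        # Send item detail pages to parse_item for scraping.
        Rule(LinkExtractor(allow=r"/item/"), callback="parse_item"),
    )

    def parse_item(self, response):
        yield {"url": response.url,
               "title": response.css("title::text").extract_first()}
```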
We will also discuss more functions that Scrapy offers after the spider is done with web scraping, and how to edit and use Scrapy parameters.
As the main purpose of web scraping is to extract data, you will learn how to write the output to CSV, JSON, and XML files.
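With Scrapy's built-in feed exports, this is a one-liner on the command line, since Scrapy infers the format from the file extension; the spider name "quotes" is carried over from the earlier sketch:

```
scrapy crawl quotes -o items.csv    # or items.json / items.xml
```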
Finally, you will learn how to store the data extracted by Scrapy into MySQL and MongoDB databases.
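For MongoDB, the standard pattern is an item pipeline; here is a minimal sketch, assuming a local MongoDB instance and made-up database and collection names:

```python
import pymongo

class MongoPipeline(object):
    """Store every scraped item in a MongoDB collection."""

    def open_spider(self, spider):
        # Placeholder URI and names; adjust to your setup.
        self.client = pymongo.MongoClient("mongodb://localhost:27017")
        self.db = self.client["scrapy_db"]

    def close_spider(self, spider):
        self.client.close()

    def process_item(self, item, spider):
        self.db["items"].insert_one(dict(item))
        return item
```

The pipeline is then enabled in the project's settings.py via ITEM_PIPELINES, e.g. {"myproject.pipelines.MongoPipeline": 300}, where the module path is a placeholder for your own project.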
- This Scrapy tutorial is meant for those who are familiar with Python and want to learn how to create an efficient web crawler and scraper to navigate through websites and scrape content from pages that contain useful information.