If you are a programmer, scrapy [0] is a good bet. It can handle robots.txt, request throttling by IP, request throttling by domain, proxies, and all the other common nitty-gritty details of crawling. The only drawback is handling pure JavaScript sites: you have to manually dig into the API or add a headless browser invocation within the scrapy handler.
Scrapy also has the ability to pause and restart crawls [1], run the crawlers distributed [2], etc. It is my go-to option.
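For anyone who hasn't seen one, a minimal spider looks roughly like this (the target is the standard Scrapy tutorial sandbox, and the throttling settings are illustrative, not a recommendation):

    import scrapy


    class QuotesSpider(scrapy.Spider):
        name = "quotes"
        # quotes.toscrape.com is the Scrapy tutorial sandbox; swap in your own
        # target and selectors.
        start_urls = ["http://quotes.toscrape.com/"]

        custom_settings = {
            "ROBOTSTXT_OBEY": True,               # respect robots.txt
            "DOWNLOAD_DELAY": 1.0,                # per-domain throttling
            "CONCURRENT_REQUESTS_PER_DOMAIN": 2,
        }

        def parse(self, response):
            for quote in response.css("div.quote"):
                yield {
                    "text": quote.css("span.text::text").extract_first(),
                    "author": quote.css("small.author::text").extract_first(),
                }
            # Follow pagination and keep crawling.
            next_page = response.css("li.next a::attr(href)").extract_first()
            if next_page:
                yield scrapy.Request(response.urljoin(next_page), callback=self.parse)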
Haven't tried this[0] yet, but Scrapy should be able to handle JavaScript sites with the JavaScript rendering service Splash[1]. scrapy-splash[2] is the plugin to integrate Scrapy and Splash.
I've recently made a little project with scrapy (for crawling) and BeautifulSoup (for parsing HTML) and it worked out great. One more thing to add to the above list is pipelines; they make downloading files quite easy.
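For example, the built-in FilesPipeline only needs a couple of settings plus a file_urls field on your items (the paths and selectors below are placeholders):

    # settings.py -- enable the built-in files pipeline and tell it where to write
    ITEM_PIPELINES = {"scrapy.pipelines.files.FilesPipeline": 1}
    FILES_STORE = "/path/to/downloads"   # placeholder path

    # In the spider, yield items with a `file_urls` field; the pipeline downloads
    # every URL listed there and records the results under `files`.
    def parse(self, response):
        yield {
            "title": response.css("h1::text").extract_first(),
            "file_urls": [response.urljoin(u)
                          for u in response.css("a.report::attr(href)").extract()],
        }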
I made a little BTC price ticker on an OLED with an Arduino. I used BeautifulSoup to get the data. Went from knowing nothing about web scraping to getting the thing working pretty quickly. Very easy to use.
I've had mixed results with scrapy, probably due more to my inexperience than anything else. For example, retrieving a posting on idealista.com with vanilla scrapy returns an error page, whereas a basic wget command retrieves the correct page.
So the learning curve for simple things makes me jump to bash scripts; scrapy might prove more valuable when your project starts to scale.
But also of course: normally the best tool is the one you already know!
Nope. It is very specifically tailored to crawling. If you just need something distributed, why not check out RQ [0], Gearman [1] or Celery [2]? RQ and Celery are Python-specific.
I once used it to automate the, well, scraping of statistics from an affiliate network account. So you can do pretty specific stuff, as long as it involves HTTP/HTTPS requests.
Yes. It beats building your own crawler that handles all the edge cases. That said, before you reach the limits of scrapy, you will more likely be restricted by preventive measures put in place by Twitter (or any other large website) to prevent any one user from hogging too many resources. Services like Cloudflare are aware of all the usual proxy servers and will immediately block such requests.
One approach that is commonly mentioned in this thread is to simulate the behavior of a normal user as much as possible, for instance by rendering the full page (including JS, CSS, ...), which is far more resource-intensive than just downloading the HTML.
However, if you're crawling big platforms, there are often ways in that can scale and stay undetected for very long periods of time. Those include forgotten API endpoints that were built for some new application that was later dropped, mobile interfaces that tap into different endpoints, and obscure platform-specific applications (e.g. PlayStation or some old version of Android). The older and larger the platform, the more probable it is that there are many entry points they don't police at all, or only very lightly.
One of the most important rules of scraping is to be patient. Everyone is anxious to get going as soon as they can, but once you start pounding on a website and draining its resources, they will take measures against you and the whole task will get far more complicated. If you have the patience and make sure you stay within some limits (hard to guess from the outside), you will eventually be able to amass large datasets.
some "ethical" measures may do the trick to. scrapy has a setting to integrate delays + you can use fake headers. Some sites are pretty persistent with their cookies (include cookies in requests). It's all case by case basis
I've used it for some larger scrapes (nothing at the scale you're talking about, but still sizeable) and scrapy has very tight integration with scrapinghub.com to handle all of the deployment issues (including worker uptime, result storage, rate-limiting, etc). Not affiliated with them in any way, just have had a good experience using them in the past.
Every hosted/cloud/SaaS/PaaS option runs into bazillions of $$$ for anything large-scale, starting with AWS bandwidth and including nearly every service on this earth.
I've actually written about this! General tips that I've found from doing more than a few projects [0], and then an overview of the Python libraries I use [1].
If you don't want to click on the links: requests and BeautifulSoup / lxml are all you need 90% of the time. Throw gevent in there and you can get a lot of scraping done in less time than you'd think.
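A rough sketch of that combination (the URLs are placeholders):

    from gevent import monkey
    monkey.patch_all()                # patch sockets before importing requests

    from gevent.pool import Pool
    import requests
    from bs4 import BeautifulSoup

    urls = ["https://example.com/page/%d" % i for i in range(1, 51)]  # placeholders

    def fetch(url):
        resp = requests.get(url, timeout=10)
        soup = BeautifulSoup(resp.text, "lxml")
        return url, soup.title.string if soup.title else None

    pool = Pool(10)                   # 10 concurrent greenlets
    for url, title in pool.imap_unordered(fetch, urls):
        print(url, title)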
And as long as we're talking about web scraping: I'm a huge fan of it. There's so much data out there that's not easily accessible and needs to be cleaned and organized. When running a learning algorithm, for example, a very hard part that isn't talked about much is getting the data before throwing it into a learning function or library. Of course, there's the legal side of it if companies are not happy with people being able to scrape, but that's a different topic.
I'll keep going. The best way to learn which tools are best is to do a project on your own and test them all out. Then you'll know what suits you. That's absolutely the best way to learn something about programming -- doing it instead of reading about it.
lxml is also known to have memory leaks [0][1], so be careful using it in any kind of automated system that parses lots of small documents. I personally encountered this issue, and it actually caused me to abandon a project until months later, when I found the references I linked above. It works nicely and fast for one-off tasks, though.
Also, a question: how often do you really encounter badly-formed markup in the wild? How hard is it really to get HTML right? It seems pretty simple, just close tags and don't embed too much crazy stuff in CDATA. Yet I often read about how HTML parsers must be "permissive" while XML parsers don't need to be. I've never had a problem parsing bad markup; usually my issues have to do with text encoding (either being mangled directly or being correctly-encoded vestiges of a prior mangling) and the other usual problems associated with text data.
lxml.etree.HTMLParser(recover=True) should work for bad HTML. A few times I had to replace characters before giving the page to lxml, but it was more of an encoding issue.
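For example (a minimal sketch):

    from lxml import etree

    broken = "<html><body><p>unclosed paragraph<div>stray div</body>"
    parser = etree.HTMLParser(recover=True)   # silently repairs the bad markup
    tree = etree.fromstring(broken, parser)
    print(etree.tostring(tree, pretty_print=True).decode())  # well-formed output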
It may not be related, but I also noticed that processing HTML with lxml (e.g. rewriting every URL of an HTML document to a different domain) was producing malformed HTML with duplicated tags. So I would recommend using lxml only as a data-extraction tool.
BeautifulSoup. The difference is that lxml can run a little faster in certain cases for a huge scrape, but you'll very rarely, if ever, need that. It's interesting and probably worthwhile to try both and know the difference, but BeautifulSoup is definitely where to start.
I maintain ~30 different crawlers. Most of them are using Scrapy. Some are using PhantomJS/CasperJS but they are called from Scrapy via a simple web service.
All data (zip files, PDF, HTML, XML, JSON) we collect is stored as-is (/path/to/<dataset name>/<unique key>/<timestamp>) and processed later using a Spark pipeline. lxml.html is WAY faster than BeautifulSoup and less prone to exceptions.
We have cronjobs (cron + Jenkins) that trigger dataset updates and discovery. For example, we scrape a corporate registry, so every day we update the 20k oldest company records. We also implement "discovery" logic in all of our crawlers so they can find new data (e.g. newly registered companies). We use Redis to send tasks (update / discovery) to our crawlers.
It's a simple Redis list containing JSON tasks. We have a custom Scrapy Spider hooked to next_request and item_scraped [1]. It checks (lpop) for update/discovery tasks in the list and builds a Request [2]. We only crawl at most ~1 request per second, so performance is not an issue.
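In outline, the pattern looks something like this (a simplified sketch that pulls tasks in start_requests rather than via the next_request/item_scraped signal hooks; names, URLs and the task format are illustrative):

    import json

    import redis
    import scrapy


    class RegistryUpdateSpider(scrapy.Spider):
        """Pull JSON tasks from a Redis list and turn each into a Request."""
        name = "registry_update"

        def start_requests(self):
            r = redis.Redis()
            while True:
                raw = r.lpop("tasks:registry")   # e.g. {"type": "update", "company": 1234}
                if raw is None:
                    break                        # queue drained, stop for now
                task = json.loads(raw)
                url = "https://registry.example/company/%s" % task["company"]
                yield scrapy.Request(url, callback=self.parse, meta={"task": task})

        def parse(self, response):
            # Store the raw payload; heavier parsing happens later (e.g. in Spark).
            yield {
                "company": response.meta["task"]["company"],
                "body": response.text,
            }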
For every website we crawl we implement a custom discovery/update logic.
Discovery can be, for example, crawling a specific date range, sequence number, postal code, etc. We usually seed discovery based on the actual data we have, like highest_company_number + 1000, so we pick up newly registered companies.
Update is for updating a single document, e.g. crawl the document for company number 1234. We generate a Request [2] to crawl only that document.
We monitor exceptions with Sentry. We store raw data so we don't have to hurry to fix the ETL, we only have to fix navigation logic and we keep crawling.
Sorry if it's a stupid question/example/comparison, just trying to understand better:
You're storing the full HTML instead of reaching into the specific divs for the data you might need, thereby separating the fetching from the parsing?
I'm a scraping rookie, and I usually fetch + parse in the same call; this might resolve some issues for me :) Thanks!
When I've done scraping, I've always taken this approach also: I decouple my process into paired fetch-to-local-cache-folder and process-cached-files stages.
I find this useful for several reasons, but particularly if you want to recrawl the same site for new/updated content, or if you decide to grab extra data from the pages (or, indeed, if your original parsing goes wrong or meets pages it wasn't designed for).
Related: As well as any pages I cache, I generally also have each stage output a CSV (requested url, local file name, status, any other relevant data or metadata), which can be used to drive later stages, or may contain the final output data.
Requesting all of the pages is the biggest time sink when scraping — it's good to avoid having to do any portion of that again, if possible.
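A minimal sketch of that fetch stage (the URLs and cache path are placeholders):

    import csv
    import hashlib
    import pathlib

    import requests

    CACHE = pathlib.Path("cache/example-job")   # placeholder cache folder
    CACHE.mkdir(parents=True, exist_ok=True)

    urls = ["https://example.com/page/1", "https://example.com/page/2"]  # placeholders

    with open(CACHE / "manifest.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["url", "local_file", "status"])
        for url in urls:
            local = CACHE / (hashlib.sha1(url.encode()).hexdigest() + ".html")
            resp = requests.get(url, timeout=30)
            local.write_bytes(resp.content)      # cache the raw bytes, parse later
            writer.writerow([url, local.name, resp.status_code])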
Always fascinated by how diverse the discussion and answers are for HN threads on web-scraping. Goes to show that "web-scraping" has a ton of connotations: everything from automated fetching of URLs via wget or cURL, to data management via something like scrapy.
Scrapy is a whole framework that may be worthwhile, but if I were just starting out for a specific task, I would use:
Python 3, AFAIK, doesn't have anything as handy as Ruby/Perl's Mechanize. But using the web developer tools you can usually figure out the requests made by the browser and then use the Session object in the Requests library to deal with stateful requests:
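For example (a minimal sketch; the endpoints and form fields are hypothetical):

    import requests

    session = requests.Session()
    session.headers.update({"User-Agent": "my-scraper/1.0 (contact@example.com)"})

    # Hypothetical login form discovered via the browser's network tab.
    session.post("https://example.com/login",
                 data={"username": "me", "password": "secret"})

    # Subsequent requests reuse the session's cookies automatically.
    resp = session.get("https://example.com/account/orders")
    print(resp.status_code)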
I usually just download pages/data/files as raw files and worry about parsing/collating them later. I try to focus on the HTTP mechanics and, if needed, the HTML parsing, before worrying about data extraction.
> Python 3, AFAIK, doesn't have anything as handy as Ruby/Perl's Mechanize. But using the web developer tools you can usually figure out the requests made by the browser and then use the Session object in the Requests library to deal with stateful requests
You could also use the WebOOB (http://weboob.org) framework. It's built on requests+lxml and provides a Browser class usable like Mechanize's (with the ability to access the document, select HTML forms, etc.).
It also has nice companion features like associating url patterns to some custom Page classes where you can write what data to retrieve when a page with this url pattern is browsed.
All great advice. I've written dozens of small purpose-built scrapers and I love your last point.
It's pretty much always a great idea to completely separate the parts that perform the HTTP fetches and the part that figures out what those payloads mean.
MechanicalSoup seems actively updated, but the last time I tried these libraries they were buggy (and/or I was ignorant) and I just couldn't get things to work the way I was used to in Ruby with Mechanize.
> Is the modified version you use a personal version or a well-known fork?
I had a specific thing I needed to do, gumbo-parser was a good match, I poked at it a little and moved on. It started with this[1] commit, then I did some other work locally which was not pushed because google/gumbo-parser is without an owner/maintainer. There are a couple of forks, but no/little adoption it seems.
I would recommend using Headless Chrome along with a library like puppeteer[0]. You get the advantage of using a real browser with which you run pages' javascript, load custom extensions, etc.
The absolute best tool I have found for scraping is Visual Web Ripper.
It is not open source and runs on Windows only, but it is one of the easiest-to-use tools I have found. I can set up scrapes entirely visually, and it handles complex cases like infinite-scroll pages, highly JavaScript-dependent pages and the like. I really wish there were an open source solution as good as this one.
I use it with one of my clients professionally. Their support is VERY good btw.
WebOOB [0] is a good Python framework for scraping websites. It's mostly used to aggregate data from multiple websites by having each site backend implement an abstract interface (for example the CapBank abstract interface for parsing banking sites), but it can be used without that part.
On the pure scraping side, it has "declarative parsing" to avoid painful plain-old procedural code [1]. You can parse pages by simply specifying a bunch of XPaths and indicating a few filters from the library to apply to those XPath elements, for example CleanText to remove whitespace nonsense, Lower (to lower-case), Regexp, CleanDecimal (to parse as a number) and a lot more. URL patterns can be associated with a Page class containing such declarative parsing. If declarative becomes too verbose, it can always be replaced locally by writing a plain-old Python method.
A set of applications is provided to visualize extracted data, and other niceties are provided to ease debugging.
Simply put: « Wonderful, Efficient, Beautiful, Outshining, Omnipotent, Brilliant: meet WebOOB ».
No one has mentioned it, so I will: consider Lynx, the text-mode web browser. Being command-line, you can automate it with Bash or even Python. I have used it quite happily to crawl largeish static sites (10,000+ web pages per site). Do a `man lynx`; the options of interest are -crawl, -traversal, and -dump. Pro tip: use it in conjunction with HTML Tidy prior to the parsing phase (see below).
I have also used custom written Python crawlers in a lot of cases.
The other thing I would emphasize is that a web scraper has multiple parts, such as crawling (downloading pages) and then actually parsing the page for data. The systems I've set up in the past typically are structured like this:
1. crawl - download pages to file system
2. clean then parse (extract data)
3. ingest extracted data into database
4. query - run adhoc queries on database
One of the trickiest things in my experience is managing updates. When new articles/content are added to the site, you only want to fetch and add those to your database, rather than crawl the whole site again. Detecting updated content can also be tricky. The brute-force approach, of course, is just to crawl the whole site again and rebuild the database - not ideal, though!
Of course, this all depends really on what you are trying to do!
For someone on a Javascript stack, I highly recommend combining a requester (e.g., "request" or "axios") with Cheerio, a server-side jQuery clone. Having a familiar, well-known interface for selection helps a lot.
We use this stack at WrapAPI (https://wrapapi.com), which we highly recommend as a tool to turn webpages into APIs. It doesn't completely do all the scraping (you still need to write a script), but it does make turning a HTML page into a JSON structure much easier.
I've just finished my research on web scraping for my company (took me about 7 days). I started with import.io and scrapinghub.com for point-and-click scraping, to see if I could do it without writing code. Ultimately, UI point-and-click scraping is for non-technical users; there is a lot of data you'll find hard to scrape that way. For example, lazada.com.my stores the product's SKU inside an attribute that looks like <div data-sku-simple="SKU11111"></div>, which I couldn't get at. import.io's pricing is also an issue: $999 a month for API access is just too high.
So I decided to use scrapy, the core of scrapinghub.com.
I haven't written much Python before, but scrapy was very easy to learn. I wrote 2 spiders and ran them on Scrapinghub (their serverless cloud). Scrapinghub supports job scheduling and many other things, at a cost. I prefer Scrapinghub because in my team we don't have DevOps. It also offers Crawlera to prevent IP banning, Portia for point-and-click (still in beta; it was still hard to use), and Splash for SPA websites, but Splash is buggy and the GitHub repo is not under active maintenance.
For DOM queries I use BeautifulSoup4. I love it. It's jQuery for Python.
For SPA websites I wrote a scrapy middleware which uses Puppeteer. Puppeteer is deployed on AWS Lambda (1M free requests in the first 365 days, more than enough for scraping) using this: https://github.com/sambaiz/puppeteer-lambda-starter-kit
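For anyone curious, such a middleware can be sketched roughly like this (the rendering endpoint and the render_js meta flag are hypothetical, and the blocking requests call is only fine at low volume). It gets enabled via the DOWNLOADER_MIDDLEWARES setting.

    import requests
    from scrapy.http import HtmlResponse


    class RenderMiddleware:
        """Forward selected requests to a Puppeteer rendering endpoint (sketch)."""
        RENDER_ENDPOINT = "https://abc123.execute-api.example/render"   # placeholder

        def process_request(self, request, spider):
            if not request.meta.get("render_js"):
                return None                      # let Scrapy download it normally
            # Blocking call; acceptable for low request rates in a sketch.
            rendered = requests.post(self.RENDER_ENDPOINT,
                                     json={"url": request.url}, timeout=60)
            return HtmlResponse(url=request.url,
                                body=rendered.text,
                                encoding="utf-8",
                                request=request)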
I am planning to use Amazon RDS to store scraped data.
Headless Chrome, Puppeteer, NodeJS (jsdom), and MongoDB. Fantastic stack for web data mining. Async based using promises for explicit user input flow automation.
I have used it with a locally hosted extension to allow easy access to the DOM and JavaScript after load, then dumped the results to a Node app. Was very happy with the results.
One thing I haven't worked on yet is waiting for stuff to load, if that turns out to be a problem. Otherwise you try to limit hitting a site by using sleep/cron.
What's also interesting is session tokens: on one site I was able to hunt down the generated token breadcrumb which the JS produced, but it wasn't valid. I still had to visit the site. Interesting.
I use a combination of Selenium and python packages (beautifulsoup). I'm primarily interested in scraping data that is supplied via javascript, and I find Selenium to be the most reliable way scrape that info. I use BS when the scraped page has a lot of data, thereby slowing down Selenium, and I pipe the page source from Selenium, with all javascript rendered, into BS.
I use explicit waits exclusively (no direct calls like `driver.find_foo_by_bar`), and find it vastly improves selenium reliability. (Shameless plug) I have a python package, Explicit[1], that makes it easier to use explicit waits.
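For example, a wait like this instead of a bare find call (the URL and selector are placeholders):

    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support import expected_conditions as EC
    from selenium.webdriver.support.ui import WebDriverWait

    driver = webdriver.Chrome()
    driver.get("https://example.com/listings")      # placeholder URL

    # Block (up to 30s) until the JS-rendered element actually exists,
    # instead of calling a find_element_* method and hoping it has loaded.
    element = WebDriverWait(driver, 30).until(
        EC.presence_of_element_located((By.CSS_SELECTOR, "div.results"))
    )
    html = driver.page_source                       # hand the rendered page to BeautifulSoup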
>I'm primarily interested in scraping data that is supplied via javascript, and I find Selenium to be the most reliable way scrape that info.
Have you found that you aren't able to find accessible APIs to request against? Have you ever tried to contact the administrators to see if there's an API you could access? Are you scraping data that would be against ToS if you tried to get it in a way that would benefit both you and the target web site?
>Have you found that you aren't able to find accessible APIs to request against?
I'm scraping from variety of different websites (1000+) that my org doesn't own. Reconfiguring to hit APIs would be complex, and a maintenance problem, both of which I easily avoid by using selenium to drive an actual browser, at the expense of time.
>Have you ever tried to contact the administrators to see if there's an API you could access?
Just not feasible given the scope and breadth of the scraping.
>Are you scraping data that would be against ToS if you tried to get it in a way that would benefit both you and the target web site?
For non-coders, import.io is great. However, they used to have a generous free plan that has since gone away (you are limited to 500 records now). Still a great product; the problem is they don't have a small plan (it starts at $299/month and goes up to $9,999).
I was looking at services in this area a few weeks ago to automate a small need I had and ran across these guys. They offer a free 5,000 monthly request basic plan. I gave it a try, worked fine (I ended up building my own solution for greater control). It's just for scraping open graph (with some fall-back capability) tags though.
I use Grepsr. Really recommend it; they have a Chrome extension that works like Kimono. It's really easy for non-technical people. If you have someone in Marketing or wherever that needs some data, maybe the only thing they need to know is how to use CSS selectors and so on.
There are a great many sites that degrade gracefully when JS support is not available. It makes absolutely no sense to waste the resources required to run a full headless browser when simple HTTP requests will retrieve the same information faster, more efficiently, and in a way that's easier to parallelize.
I haven't dug deep recently, but if you need to automate browser download dialog this wasn't possible with Headless Chrome. (I'd love to find out that this has changed, and you can control it as well as you can with Selenium)
For most things, I use Node.js with the Cheerio library, which is basically a stripped-down version of jQuery without the need for a browser environment. I find using the jQuery API far more desirable than the clunky, hideous Beautiful Soup or Nokogiri APIs.
For something that requires an actual DOM or code execution, PhantomJS with Horseman works well, though everyone is talking about headless Chrome these days so IDK. I've not had nearly as many bad experiences with PhantomJS as others have purportedly experienced.
I have been playing around with Cheerio for a short while and it is quite cool! Although extracting comments wasn't as straightforward as I thought it would be.
Do you have any experience with processing and scraping large files using Cheerio? It doesn't support streaming does it? I am currently faced with processing a ~75 MB XML and I am not sure if Cheerio is suited for that.
I remember trying to use mechanize as a beginning rubyist and I can't recommend it from that experience. Specifically I remember poor documentation and confusing layers of abstraction. It might be better now that I know what the DOM is and how jQuery selectors work, but my first impression was abysmal.
I maintain about 8 crawlers and I use only vanilla Python
I have a function to help me search:

    def find_r(value, ind, array, stop_word):
        # Walk through the markers in `array`, moving the index one character
        # past the start of each match, then slice up to `stop_word`.
        # Returns the extracted substring and the end position, so calls can be chained.
        indice = ind
        for i in array:
            indice = value.find(i, indice) + 1
        end = value.find(stop_word, indice)
        return value[indice:end], end
If you can get away without a JS environment, do so. Something like scrapy will be much easier than a full browser environment. If you cannot, don't bother going halfway and just go straight for headless Chrome or Firefox. Unfortunately, Selenium seems to be past its useful life, as Firefox dropped support and Chrome has chromedriver, which wraps around it. PhantomJS is woefully out of date, and since it's a different environment than your target site was designed for, it just leads to problems.
I manage the WebDriver work at Mozilla, making Firefox work with Selenium. I can categorically state we haven't killed Selenium. Over the last few years we have invested more in Selenium than other browsers have.
Selenium IDE no longer works in Firefox for a number of reasons:
1) Selenium IDE didn't have a maintainer.
2) Selenium IDE is a Firefox add-on, and Mozilla changed how add-ons work. They did this for numerous security reasons.
My apologies, I was mistaken, but I can't edit my post now. It looks like the selenium code has moved into something called geckodriver, which I suppose is a wrapper around the underlying Marionette protocol.
Firefox did not drop support for Selenium. Selenium IDE, a record/playback test creation tool, stopped working in newer versions of Firefox, but a) Selenium IDE is only one part of the Selenium project, and b) The Selenium team is working on a new version of IDE compatible with the new Firefox add-on APIs.
I've done this professionally in an infrastructure processing several terabytes per day. A robust, scalable scraping system comprises several distinct parts:
1. A crawler, for retrieving resources over HTTP, HTTPS and sometimes other protocols a bit higher or lower on the network stack. This handles data ingestion. It will need to be sophisticated these days - sometimes you'll need to emulate a browser environment, sometimes you'll need to perform a JavaScript proof of work, and sometimes you can just do regular curl commands the old fashioned way.
2. A parser, for correctly extracting specific data from JSON, PDF, HTML, JS, XML (and other) formatted resources. This handles data processing. Naturally you'll want to parse JSON wherever you can, because parsing HTML and JS is a pain. But sometimes you'll need to parse images, or outdated protocols like SOAP.
3. A RDBMS, with databases for both the raw and normalized data, and columns that provide some sort of versioning of the data at a particular point in time. This is quite important, because if you collect the raw data and store it, you can re-parse it in perpetuity instead of needing to retrieve it again. This will happen somewhat frequently if you come across new data while scraping that you didn't realize you'd need or could use. Furthermore, if you're updating the data on a regular cadence, you'll need to maintain some sort of "retrieved_at"/"updated_at" awareness in your normalized database. MySQL or PostgreSQL are both fine.
4. A server and event management system, like Redis. This is how you'll allocate scraping jobs across available workers and handle outgoing queuing for resources. You want a centralized terminal for viewing and managing a) the number of outstanding jobs and their resource allocations, b) the ongoing progress of each queue, c) problems or blockers for each queue.
5. A scheduling system, assuming your data is updated in batches. Cron is fine.
6. Reverse engineering tools, so you can find mobile APIs and scrape from them instead of using web targets. This is important because mobile API endpoints a) change far less frequently than web endpoints, and b) are far more likely to be JSON formatted, instead of HTML or JS, because the user interface code is offloaded to the mobile client (iOS or Android app). The mobile APIs will be private, so you'll typically have to reverse engineer the HMAC request signing algorithm, but that is virtually always trivial, with the exception of companies that really put effort into obfuscating the code. apktool, jadx and dex2jar are typically sufficient for this if you're working with an Android device.
7. A proxy infrastructure, so that you're not constantly pinging a website from the same IP address. Even if you're being fairly innocuous with your scraping, you probably want this, because many websites have been burned by excessive spam and will automatically ban any IP address that issues even nominally more requests than a regular user would, regardless of volume. Your proxies come in several flavors: datacenter, residential and private. Datacenter proxies are the first to be banned, but they're the cheapest; these are proxies resold from datacenter IP ranges. Residential IP addresses are IP addresses that are not associated with spam activity and which come from ISP IP ranges, like Verizon Fios. Private IP addresses are IP addresses that have not been used for spam activity before and which are reserved for use by only your account. Naturally this list runs from lower to greater expense; it's also in order from most likely to least likely to be banned by a scraping target. NinjaProxies, StormProxies, Microleaf, etc. are all good options. Avoid Luminati, which offers residential IP addresses contributed by users who don't realize their IP addresses are being leased through the use of Hola VPN.
Each website you intend to scrape is given a queue. Each queue is assigned a specific allotment of workers for processing scraping jobs in that queue. You'll write a bunch of crawling, parsing and database querying code in an "engine" class to manage the bulk of the work. Each scraping target will then have its own file which inherits functionality from the core class, with the specific crawling and parsing requirements in that file. For example, implementations of the POST requests, user agent requirements, which type of parsing code needs to be called, which database to write to and read from, which proxies should be used, asynchronous and concurrency settings, etc should all be in here.
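In outline, that inheritance pattern looks something like this (a bare sketch with made-up names and most bodies omitted):

    import requests


    class Engine:
        """Shared crawling/parsing/storage plumbing (details omitted)."""
        proxies = None
        user_agent = "Mozilla/5.0 ..."      # placeholder

        def fetch(self, url, **kwargs):
            headers = {"User-Agent": self.user_agent}
            return requests.get(url, headers=headers, proxies=self.proxies,
                                timeout=30, **kwargs)

        def store_raw(self, key, payload):   # write to the raw database
            ...

        def store_normalized(self, record):  # write parsed fields + timestamps
            ...


    class ExampleSiteScraper(Engine):
        """Per-target file: endpoint, headers, parsing and storage choices."""
        proxies = {"https": "http://residential-proxy.example:8080"}  # placeholder

        def run(self, job):
            resp = self.fetch("https://m.example.com/api/items/%s" % job["id"])
            self.store_raw(job["id"], resp.text)
            self.store_normalized(self.parse(resp.json()))

        def parse(self, data):
            return {"id": data["id"], "name": data["name"]}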
Once triggered in a job, the individual scraping functions will call to the core functionality, which will build the requests and hand them off to one of a few possible functions. If your code is scraping a target that has sophisticated requirements, like a JavaScript proof of work system or browser emulation, it will be handed off to functionality that implements those requirements. Most of the time, this won't be needed and you can just make your requests look as human as possible - then it will be handed off to what is basically a curl script.
Each request to the endpoint is a job, and the queue will manage them as such: the request is first sent to the appropriate proxy vendor via the proxy's API, then the response is sent back through the proxy. The raw response data is stored in the raw database, then normalized data is processed out of the raw data and inserted into the normalized database, with corresponding timestamps. Then a new job is sent to a free worker. Updates to the normalized data will be handled by something like cron, where each queue is triggered at a specific time on a specific cadence.
You'll want to optimize your workflow to use endpoints which change infrequently and which use lighter resources. If you are sending millions of requests, loading the same boilerplate HTML or JS data is a waste. JSON resources are preferable, which is why you should invest some amount of time before choosing your endpoint into seeing if you can identify a usable mobile endpoint. For the most part, your custom code is going to be in middleware and the parsing particularities of each target; BeautifulSoup, QueryPath, Headless Chrome and JSDOM will take you 80% of the way in terms of pure functionality.
> 3. A RDBMS, with databases for both the raw and normalized data
I've found the filesystem (local or network, depending on scale) works well for the raw data. A normalized file name with a timestamp and job identifier in a hashed directory structure of some sort (I generally use $jobtype/%Y-%m-%d/%H/ as a start) works well, and reading and writing gzip is trivial (and often you can just output the raw content of gzip encoded payloads). The filesystem is an often overlooked database. If you end up needing more transactional support, or to easily identify what's been processed or not, look at how Maildir works.
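A minimal sketch of that layout (the root, job type and ID are placeholders):

    import gzip
    import pathlib
    from datetime import datetime, timezone

    def cache_path(root, jobtype, job_id):
        # $jobtype/%Y-%m-%d/%H/ layout, as described above
        now = datetime.now(timezone.utc)
        d = pathlib.Path(root) / jobtype / now.strftime("%Y-%m-%d") / now.strftime("%H")
        d.mkdir(parents=True, exist_ok=True)
        return d / ("%s_%s.html.gz" % (job_id, now.strftime("%Y%m%dT%H%M%S")))

    # Write the raw payload gzipped; parse (and re-parse) it whenever needed.
    path = cache_path("/data/raw", "product_pages", "12345")   # placeholder values
    with gzip.open(path, "wb") as f:
        f.write(b"<html>...raw response bytes...</html>")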
After normalization, the database is ideal though.
That said, I was doing a few gigabytes a day, not a few terabytes, so you might have run into some scale issues I didn't. I was able to keep it to mostly one box for crawling and parsing, but the crawlers ended up being complex and job-queue driven enough that expanding to multiple systems wouldn't have been all that much extra work (an assessment I feel confident in, having done similar things before).
2. Extract the text content from the text nodes and ignore nodes that contain only white space:
    let text = document.getNodesByType(3),  // assumes a helper returning all nodes of a given nodeType (3 = text)
        a = 0,
        b = text.length,
        output = [];
    do {
        // keep only text nodes that contain something other than whitespace
        if ((/^(\s+)$/).test(text[a].textContent) === false) {
            output.push(text[a].textContent);
        }
        a = a + 1;
    } while (a < b);
    output;
That will gather ALL text from the page. Since you are working from the DOM directly you can filter your results by various contextual and stylistic factors. Since this code is small and executes stupid fast it can be executed by bots easily.
You could write a crawler in any language. Crawling is easy as you are listening for HTTP traffic and analyzing the HTML in the response.
To accurately get the content in dynamically executed pages you need to interact with the DOM. This is the reason Google updated its crawler to execute JavaScript.
Yes. The crawler can be written in nearly any language. The actual scraper probably has to be written in JavaScript in order to access and interact with the DOM as the user would and thereby gain access to content that is not present by default.
For very simple tasks Listly seems to be a fast and good solution: http://www.listly.io/
If you need more power, I heard good stuff about http://80legs.com/ though never tried them myself.
If you really need to do crazy shit like crawling the iOS App Store really fast and keeping things up to date, I suggest using Amazon Lambda and a custom Python parser. Though Lambda is not meant for this kind of thing, it works really well and is super scalable at a reasonable price.
PhantomJS is woefully out of date; you need a polyfill even for Function.bind. Firefox dropped support for Selenium in 47, and Chrome only supports it via a wrapper called chromedriver.
Are you talking about Selenium WebDriver or Selenium IDE (the record/playback tool for Firefox)? Those are two separate things. Selenium WebDriver implements a cross-browser W3C standard, and Firefox very much still supports it.
We have been using Kapow Robosuite for close to 10 years now. It's a commercial GUI-based tool which has worked well for us; it saves us a lot of maintenance time compared to our previous hand-rolled code-extraction pipeline. The only problem is that it's very expensive (pricing seems catered towards very large enterprises).
So I was really hoping this thread would have revealed some newer commercial GUI-based alternatives (on-premise, not SaaS), because I don't ever want to go back to the maintenance hell of hand-rolled robots again :)
For mostly static pages, requests/pycurl + BeautifulSoup are more than sufficient. For advanced scraping, take a look at scrapy.
For JavaScript-heavy pages, most people rely on Selenium WebDriver. However, you can also try hlspy (https://github.com/kanishka-linux/hlspy), a little utility I made a while ago for dealing with JavaScript-heavy pages in simple cases.
One important avenue for scraping AJAX-heavy websites that block PhantomJS is Google Chrome's extension support. An extension can mirror the DOM and send it to an external server for processing, where we can use Python lxml to XPath to the appropriate nodes. This worked for me to scrape Google, before we hit the captcha. If anyone is interested, I can share the code I wrote to scrape websites!
In case you want to scrape the FindTheCompany database: I have done it successfully!
> This worked for me to scrape Google, before we hit the captcha.
If Google wanted to give back something to the community, it would offer cheap automated searches (current prices are absurd). Another thing - more depth after the first 1000 results. Sometimes you want to know the next result. We shouldn't need to do all these stupid things to batch query a search engine, it should be open. That makes it all the more important to invent an open-source, federated search engine, so we can query to our heart's content (and have privacy).
As for 'federated search engine' - it's not 'federated' per se but check out Gigablast search engine. Open source (source on GitHub) and a TOTALLY AWESOME piece of software written by one guy. You can do good searches at the Gigablast site[1], or set up your own search engine. Gigablast also offers an API (I may be wrong but I think DuckDuckGo uses that API for some tasks).
If you need to scrape content from complex JS apps (eg. React) where it doesn't pay to reverse engineer their backend API (or worse, it's encrypted/obfuscated) you may want to look at CasperJS.
It's a very easy to use frontend to PhantomJS. You can code your interactions in JS or CoffeeScript and scrape virtually anything with a few lines of code.
If you need crawling, just pair a CasperJS script with any spider library like the ones mentioned around here.
Depends on your skillset and the data you want to scrape. I am testing the waters for a new business that relies on scraped data. As a non-programmer, I had good success testing stuff with contentgrabber. Import.io also gets mentioned a lot. I tried out Octoparse, but it wasn't stable for my scraping.
Agenty is a cloud-hosted web scraping app; you can set up scraping agents using their point-and-click CSS selector Chrome extension to extract anything from HTML with these 3 modes:
- TEXT: simple clean text
- HTML: outer or inner HTML
- ATTR: any attribute of an HTML tag, like an image src or hyperlink href…
Or advanced modes like REGEX, XPATH, etc.
You then save the scraping agent and execute it on their cloud-hosted app, which has features like batch crawling, scheduling and scraping multiple websites simultaneously, without worrying about IP-address blocks or speed.
If you need to interpret javascript, or otherwise simulate regular browsing as closely as possible, you may consider running a browser inside a container and controlling it with selenium. I have found it’s necessary to run inside the container if you do not have a desktop environment. This is better suited for specific use cases rather than mass collection because it is slower to run a full browsing stack than to only operate at the HTTP layer. I have found that alternatives like phantomJS are hard to debug. Consider opening VNC on the container for debugging. Containers like this that I know of are SeleniumHQ and elgalu/selenium.
Second this. My go-to for years now. Inexpensive for what it does. Factor in the cost of building out its features in your home-rolled solution, and you'll be saving a ton. Plus the team is very responsive if you need support, and is open to small consulting projects if you need something beyond your own abilities.
I used to use a combo of Python tools, mostly Requests and BeautifulSoup. However, the last few things I've built used Selenium to drive headless Chrome browsers. This lets me run the JavaScript most sites use these days.
Apify (https://www.apify.com) is a web scraping and automation platform where you can extract data from any website using a few simple lines of JavaScript. It's using headless browsers, so that people can extract data from pages that have complex structure, dynamic content or employ pagination.
Recently the platform added support for headless Chrome and Puppeteer, and you can even run jobs written in Scrapy or any other library, as long as they can be packaged as a Docker container.
I agree with others, with curl and the likes you will hit insurmountable roadblocks sooner or later. It's better to go full headless browser from the start.
I use a python->selenium->chrome stack. The Page Object Model [0] has been a revelation for me. My scripts went from being a mess of spaghetti code to something that's a pleasure to write and maintain.
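A bare-bones page object looks something like this (the locator and class names are hypothetical):

    from selenium.webdriver.common.by import By
    from selenium.webdriver.support import expected_conditions as EC
    from selenium.webdriver.support.ui import WebDriverWait


    class SearchResultsPage:
        """One class per page: locators and actions live here, not in the script."""
        RESULT_TITLES = (By.CSS_SELECTOR, "div.result h2")   # hypothetical locator

        def __init__(self, driver):
            self.driver = driver
            self.wait = WebDriverWait(driver, 30)

        def titles(self):
            self.wait.until(EC.presence_of_element_located(self.RESULT_TITLES))
            return [el.text for el in self.driver.find_elements(*self.RESULT_TITLES)]

    # The scraping script then reads like a story:
    #   page = SearchResultsPage(driver)
    #   for title in page.titles():
    #       ...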
Whatever you end up using for scraping, I beg you to pick a unique user-agent which allows a webmaster to understand which crawler it is, to better allow it to pass through (or be banned, depending).
Don't stick with the default "scrapy" or "Ruby" or "Jakarta Commons-HttpClient/...", which end up (justly) being banned more easily than unique ones, like "ABC/2.0 - https://example.com/crawler" or the like.
Note that for some libraries, the agent is set to empty or whatever the default is for the tool (e.g. `curl/7.43.0` for curl). It's always worth setting it to something.
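For example, using the identifier style from above:

    # requests
    import requests
    resp = requests.get("https://example.com",
                        headers={"User-Agent": "ABC/2.0 - https://example.com/crawler"})

    # Scrapy: settings.py
    USER_AGENT = "ABC/2.0 - https://example.com/crawler"

    # curl:
    #   curl -A "ABC/2.0 - https://example.com/crawler" https://example.com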
As a frequent scraper of government sites, and sometimes commercial sites for research purposes, I avoid faking a User-Agent as much as possible, i.e. copying the default strings for popular browsers:
`Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2228.0 Safari/537.36`
Almost always, if a site rejects my scraper on the basis of agent, they're doing a regex for "curl", "wget" or for an empty string. Setting a user-agent to something unique and explicit, i.e. "Dan's program by danso@myemail.com" works fine without feeling shady.
Maybe for old government sites that break on anything but IE, you'll have to pretend to be IE, but that's very rare.
We had a really tough time scraping dynamic web content using scrapy, and both scrapy and selenium require you to write a program (and maintain it) for every separate website that you have to scrape. If the website's structure changes you need to debug your scraper. Not fun if you need to manage more than 5 scrapers.
It was so hard that we made our own company JUST to scrape stuff easily without requiring programming. Take a look at https://www.parsehub.com
I use Node and either puppeteer[0] or plain Curl[1]. IMO Curl is years ahead of any Node.js request lib. For proxies I use (shameless plug!) https://gimmeproxy.com .
I made this https://www.drupal.org/project/example_web_scraper and produced the underlying code many years ago. The idea is to map xpath queries to your data model and use some reusable infrastructure to simply apply it. It was very good, imho (for what it was). (I'm writing this comment since I don't see any other comments with the words map or model :/ )
I am really surprised nobody has mentioned pyspider. It is simple, has a web dashboard and can handle JS pages. It can store data in a database of your choice and handles scheduling and recrawling. I have used it to crawl Google Play. A $5 DigitalOcean VPS with pyspider installed could handle millions of pages crawled, processed and saved to a database.
https://github.com/featurist/coypu is nice for browser automation. A related question: what are good tools for database scraping, meaning replicating a backend database via a web interface (not referring to compromising the application, rather using allowed queries to fully extract the database).
For a little diversity on tools, if you're looking for something quick that others can access the data easily - Google Apps script in a Google Sheet can be quite useful.
One of the challenges with modern-day scraping is that you need to account for client-side JS rendering.
If you prefer an API as a service that can pre-render pages, I built Page.REST (https://www.page.rest). It allows you to get rendered page content via CSS selectors as a JSON response.
The best tool for web scraping, for me, is something easy to deploy and redeploy; and something that doesn't rely on three working programs--eliminating selenium sounds great.
I just tried puppeteer yesterday for the first time. It seems to work very well. My only complaint is that it is very new and does not yet have a plethora of examples.
I previously have used WWW::Mechanize in the Perl world, but single page applications with Javascript really require something with a browser engine.
I used CasperJS [0] in the past to scrape a JavaScript-heavy forum (ProBoards) and it worked well. But that was a few years ago; I have no idea what new strategies have come up in the meantime.
Been getting blocked by reCAPTCHA more and more. Do any of these tools handle that, or have workarounds, by default? I've tried routing through proxies, swapping IP addresses, slowing down, etc. Any specific ways people get around it?
I’ve been using puppeteer to scrape and it’s been fantastic. Since it’s a headless browser, it can handle SPA just as well as server side loaded traditional websites. It’s also incredibly easy to use with async/await.
If you only need simple scraping, I like a traditional HTTP request lib. For more robust scraping (i.e. clicking buttons / filling in text), use Capybara with either PhantomJS or chromedriver - easy to install using Homebrew!
A ton of people recommended Scrapy - and I am always looking for senior Scrapy resources that have experience scraping at scale. Please feel free to reach out - contact info is in my profile.
We're about to announce a new Python scraping toolkit, memorious: https://github.com/alephdata/memorious - it's a pretty lightweight toolkit, using YAML config files to glue together pre-built and custom-made components into flexible and distributed pipelines. A simple web UI helps track errors and execution can be scheduled via celery.
We looked at scrapy, but it just seemed like the wrong type of framing for the type of scrapers we build: requests, some html/xml parser, and output into a service API or a SQL store.
Also good is RoboBrowser which combines beautifulsoup with Requests to get a nice 'Browser' abstraction. It also has good built-in functionality for filling in forms.
Any details on this anywhere, or is it not for public consumption? I'm just getting started in Python and want to do something with Gumtree and eBay as an idea to help me in a different sphere.
It's not really for public consumption because it's embarrassingly badly written :)
It's pretty dumb really. Just figured out the search URLs and then parse the list responses. It then stores the auctions/ad IDs it has seen in a tiny redis instance with 60 days' expiry on each ID it inserts. If there are any items it hasn't seen each time it runs, it compiles them in a list and emails them to me via AWS SNS. Runs every 5 minutes from cron on a Raspberry Pi Zero plugged into the back of my XBox 360 as a power supply and my router via a USB/ethernet cable.
The main bulk of the work went into the searches to run which are a huge list of typos on things with a high return. I tend to buy, test, then reship them for profit. Not much investment gives a very good return - pays for the food bill every month :)
Thanks for the info - I'm sure mine will be of lower quality when I do write it - hoping to compile real-world info on sold vehicles by scraping info from eBay and Gumtree, but that will take time and more skills than I currently possess. Good to hear someone's made something out of a similar idea, though.
It's getting a little long in the tooth, but I will be updating it soon to use a Chrome-based renderer. If you have any suggestions, you can leave them here or PM me :)
So, in general, what do most people use web scraping for? Is it building up their own database of things not available via an API, or something? It always sounds interesting, but the need for it is what confuses me.
I've generally used it to sort data in some way that's not available on the original webpage. Either into a csv file, making large lists easier to view, or to determine some optimum, such as the best price.
- Search a job website for a search term and list of locations, collecting each job title, company, location, and link, to view as one large spreadsheet, instead of having to navigate through 10 results per page.
- Collect cost of living indices in a list of cities
That really depends on your project and tech stack. If you're into Python and are going to deal with relatively static HTML, then the Python modules Scrapy [1], BeautifulSoup [2] and the whole Python data crunching ecosystem are at your disposal. There's lots of great posts about getting such a stack off the ground and using it in the wild [3]. It can get you pretty darn far, the architecture is solid and there are lots of services and plugins which probably do everything you need.
Here's where I hit the limit with that setup: dynamic websites. If you're looking at something like Discourse-powered communities or similar, and feel a bit too lazy to dig into all the ways requests are expected to look, it's no fun anymore. Luckily, there's lots of JS goodness which can handle dynamic websites, inject your JavaScript for convenience, and more [4].
The recently published Headless Chrome [5] and puppeteer [6] (a Node API for it) are really promising for many kinds of tasks - scraping among them. You can get a first impression in this article [7]. The ecosystem does not seem to be as mature yet, but I think this will be the foundation of the next go-to scraping tech stack.
If you want to try it yourself, I've written a brief intro [8] and published a simple dockerized development environment [9], so you can give it a go without cluttering your machine or find out what dependencies you need and how the libraries are called.
hey I'm working on this thing called BAML (browser automation markup language) and it looks something like this:
OPEN http://asdf.com
CRAWL a
EXTRACT {'title': '.title'}
It's meant to be super simple and built from ground up to support crawling Single Page Applications.
Also, creating a terminal client (early ver: https://imgur.com/a/RYx5g) for it which will launch a Chrome browser and scrape everything. http://export.sh is still very early in the works, I'd appreciate any feedback (email in profile, contact form doesn't work).
Proxycrawl seemed interesting, so I just tried it out. It appears to have problems with redirects, which is something I expect they would have figured out.
I’m crawling around 80-120M per month and the price for me fits my needs. But I suggest that you contact them if you have special needs or requirements.
Also, you have to consider the amount of work, time and money you will save by not having to maintain your own system to avoid blocks and bans from the websites you are trying to crawl. With them you just call an API endpoint and don't have to care about any of that.
I've been using it for around 3-4 months with different sites. For LinkedIn it's been a bit more than 2 months. They are a good startup and they've been improving their services a lot. They only count successful requests, so you don't have to worry about failures.
If you get a bigger package they will raise your limits I guess. But I suggest that you contact them directly
[0] https://scrapy.org/
[1] https://doc.scrapy.org/en/latest/topics/jobs.html
[2] https://github.com/rmax/scrapy-redis