Web Scraping with Javascript and NodeJS



Javascript has become one of the most popular and widely used languages due to the massive improvements it has seen and the introduction of the runtime known as NodeJS. Whether it's a web or mobile application, Javascript now has the right tools. This article will explain how the vibrant ecosystem of NodeJS allows you to efficiently scrape the web to meet most of your requirements.

With the growing importance of web mining, web mining tools have also developed rapidly. There are several tools and software packages available for working out business insights and intelligence, including free open-source tools like Bixo, which lets you carry out link analysis. Browser-based options exist too: the Web Scraper Chrome extension, for example, lets you devise a plan or sitemap describing how a particular website should be navigated, and then follows that plan to scrape the data. But what is web scraping exactly? Web scraping is an automated method used to extract large amounts of data from websites. The data on websites is unstructured; web scraping helps collect this unstructured data and store it in a structured form. There are different ways to scrape websites, such as online services, APIs, or writing your own code.

Prerequisites

This post is primarily aimed at developers who have some level of experience with Javascript. However, if you have a firm understanding of web scraping but no experience with Javascript, this post could still prove useful. Below are the recommended prerequisites for this article:

  • ✅ Experience with Javascript
  • ✅ Experience using DevTools to extract selectors of elements
  • ✅ Some experience with ES6 Javascript (Optional)

⭐ Make sure to check out the resources at the end of this article to learn more!

Outcomes

After reading this post, you will be able to:

  • Have a functional understanding of NodeJS
  • Use multiple HTTP clients to assist in the web scraping process
  • Use multiple modern and battle-tested libraries to scrape the web

Understanding NodeJS: A brief introduction

Javascript is a simple and modern language that was initially created to add dynamic behavior to websites inside the browser. When a website is loaded, Javascript is run by the browser's Javascript Engine and converted into a bunch of code that the computer can understand.

For Javascript to interact with your browser, the browser provides a Runtime Environment (document, window, etc.).

This means that Javascript is not the kind of programming language that can interact with or manipulate the computer or its resources directly. Servers, on the other hand, are capable of directly interacting with the computer and its resources, which allows them to read files or store records in a database.

When introducing NodeJS, the crux of the idea was to make Javascript capable of running not only client-side but also server-side. To make this possible, Ryan Dahl, a skilled developer took Google Chrome's v8 Javascript Engine and embedded it with a C++ program named Node.

So, NodeJS is a runtime environment that allows an application written in Javascript to be run on a server as well.

As opposed to how most languages, including C and C++, deal with concurrency, which is by employing multiple threads, NodeJS makes use of a single main thread and utilizes it to perform tasks in a non-blocking manner with the help of the Event Loop.

Putting up a simple web server is fairly easy, as shown below:
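A minimal sketch of such a server using Node's built-in http module might look like this (port 3000 and the “Hello World” response match the behavior described below):

```js
const http = require('http');

// Create a server that answers every request with a plain-text "Hello World"
const server = http.createServer((req, res) => {
  res.statusCode = 200;
  res.setHeader('Content-Type', 'text/plain');
  res.end('Hello World');
});

server.listen(3000, () => {
  console.log('Server running at http://localhost:3000/');
});
```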

If you have NodeJS installed, run the above code by typing node <YourFileNameHere>.js (without the < and >), then open up your browser and navigate to localhost:3000, and you will see some text saying “Hello World”. NodeJS is ideal for applications that are I/O intensive.

HTTP clients: querying the web

HTTP clients are tools capable of sending a request to a server and then receiving a response from it. Almost every tool that will be discussed in this article uses an HTTP client under the hood to query the server of the website that you will attempt to scrape.

Request

Request is one of the most widely used HTTP clients in the Javascript ecosystem. However, the author of the Request library has officially declared it deprecated. This does not mean it is unusable: quite a lot of libraries still use it, and it is still well worth knowing.

It is fairly simple to make an HTTP request with Request:
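A minimal sketch of Request's callback API might look like the following; the subreddit JSON URL is just an example target:

```js
const request = require('request');

request('https://www.reddit.com/r/programming.json', (error, response, body) => {
  if (error) {
    return console.error('error:', error);
  }
  console.log('status code:', response.statusCode);
  console.log('body:', body);
});
```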

You can find the Request library at GitHub, and installing it is as simple as running npm install request.

You can also find the deprecation notice and what this means here. If you don't feel safe about the fact that this library is deprecated, there are other options down below!

Axios

Axios is a promise-based HTTP client that runs both in the browser and NodeJS. If you use TypeScript, then Axios has you covered with built-in types.

Making an HTTP request with Axios is straightforward. It ships with promise support by default, as opposed to utilizing callbacks like Request:
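For instance, a promise-based sketch along these lines (again, the Reddit JSON URL is only an example):

```js
const axios = require('axios');

axios
  .get('https://www.reddit.com/r/programming.json')
  .then((response) => {
    console.log(response.data);
  })
  .catch((error) => {
    console.error(error);
  });
```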

If you fancy the async/await syntax sugar for the promise API, you can do that too. But since top-level await was still at stage 3 at the time of writing, we will have to make use of an async function instead:
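A rough equivalent wrapped in an async function named getForum (the name referenced just below) could look like this:

```js
const axios = require('axios');

async function getForum() {
  try {
    const response = await axios.get('https://www.reddit.com/r/programming.json');
    console.log(response.data);
  } catch (error) {
    console.error(error);
  }
}
```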

All you have to do is call getForum! You can find the Axios library at GitHub, and installing Axios is as simple as npm install axios.

SuperAgent

Much like Axios, SuperAgent is another robust HTTP client that has support for promises and the async/await syntax sugar. It has a fairly straightforward API like Axios, but SuperAgent has more dependencies and is less popular.

Regardless, making an HTTP request with SuperAgent using promises, async/await, or callbacks looks like this:
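A sketch showing the three styles side by side, with the forum URL as a placeholder:

```js
const superagent = require('superagent');
const forumURL = 'https://www.reddit.com/r/programming.json';

// callbacks
superagent.get(forumURL).end((error, response) => {
  if (error) return console.error(error);
  console.log(response.body);
});

// promises
superagent
  .get(forumURL)
  .then((response) => console.log(response.body))
  .catch((error) => console.error(error));

// promises with async/await
async function getForum() {
  try {
    const response = await superagent.get(forumURL);
    console.log(response.body);
  } catch (error) {
    console.error(error);
  }
}
```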

You can find the SuperAgent library at GitHub, and installing SuperAgent is as simple as npm install superagent.

For the upcoming few web scraping tools, Axios will be used as the HTTP client.

Note that there are other great HTTP clients for web scraping, like node-fetch!

Regular expressions: the hard way

The simplest way to get started with web scraping without any dependencies is to use a bunch of regular expressions on the HTML string that you fetch using an HTTP client. But there is a big tradeoff: regular expressions aren't as flexible as an HTML parser, and both professionals and amateurs struggle to write them correctly.

For complex web scraping, the regular expressions can also get out of hand. With that said, let's give it a go. Say there's a label with some username in it, and we want the username. This is similar to what you'd have to do if you relied on regular expressions:
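A minimal sketch of that approach, using a hard-coded HTML string for illustration:

```js
const htmlString = '<label>Username: John Doe</label>';
const result = htmlString.match(/<label>(.+)<\/label>/);

console.log(result[1]);
// Username: John Doe
console.log(result[1].split(': ')[1]);
// John Doe
```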

In Javascript, match() usually returns an array with everything that matches the regular expression. In the second element (at index 1), you will find the textContent or the innerHTML of the <label> tag, which is what we want. But this result contains some unwanted text (“Username: ”), which has to be removed.

As you can see, for a very simple use case, the number of steps and the amount of work required are unnecessarily high. This is why you should rely on something like an HTML parser, which we will talk about next.

Cheerio: Core jQuery for traversing the DOM

Cheerio is an efficient and light library that allows you to use the rich and powerful API of jQuery on the server-side. If you have used jQuery previously, you will feel right at home with Cheerio. It removes all of the DOM inconsistencies and browser-related features and exposes an efficient API to parse and manipulate the DOM.
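A tiny sketch of the Cheerio API, loading a hard-coded HTML snippet and manipulating it:

```js
const cheerio = require('cheerio');
const $ = cheerio.load('<h2 class="title">Hello world</h2>');

// Change the text and add a class, jQuery-style
$('h2.title').text('Hello there!');
$('h2').addClass('welcome');

console.log($.html());
// <html><head></head><body><h2 class="title welcome">Hello there!</h2></body></html>
```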

As you can see, using Cheerio is similar to how you'd use jQuery.

However, it does not work the same way that a web browser works, which means it does not:

  • Render any of the parsed or manipulated DOM elements
  • Apply CSS or load any external resource
  • Execute Javascript

So, if the website or web application that you are trying to crawl is Javascript-heavy (for example, a Single Page Application), Cheerio is not your best bet. You might have to rely on other options mentioned later in this article.

To demonstrate the power of Cheerio, we will attempt to crawl the r/programming forum on Reddit and get a list of post names.

First, install Cheerio and Axios by running the following command: npm install cheerio axios.

Then create a new file called crawler.js, and copy/paste the following code:
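A sketch of what crawler.js could contain, based on the steps described below; the selector targets old Reddit's markup and may need adjusting if the page changes:

```js
const axios = require('axios');
const cheerio = require('cheerio');

const getPostTitles = async () => {
  try {
    // Fetch the HTML of old Reddit's r/programming front page
    const { data } = await axios.get('https://old.reddit.com/r/programming/');
    // Feed the HTML into Cheerio
    const $ = cheerio.load(data);
    const postTitles = [];

    // Target the anchor inside each post's title paragraph
    $('div > p.title > a').each((_idx, el) => {
      postTitles.push($(el).text());
    });

    return postTitles;
  } catch (error) {
    throw error;
  }
};

getPostTitles().then((postTitles) => console.log(postTitles));
```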

getPostTitles() is an asynchronous function that will crawl Reddit's old r/programming forum. First, the HTML of the website is obtained using a simple HTTP GET request with the axios HTTP client library. Then the HTML data is fed into Cheerio using the cheerio.load() function.

With the help of the browser DevTools, you can obtain the selector that is capable of targeting all of the post cards. If you've used jQuery, $('div > p.title > a') is probably familiar. This will get all the posts. Since you only want the title of each post individually, you have to loop through each post. This is done with the help of the each() function.

To extract the text out of each title, you must fetch the DOM element with the help of Cheerio (el refers to the current element). Then, calling text() on each element will give you the text.

Now, you can pop open a terminal and run node crawler.js. You'll then see an array of about 25 or 26 different post titles (it'll be quite long). While this is a simple use case, it demonstrates the simple nature of the API provided by Cheerio.

If your use case requires the execution of Javascript and loading of external sources, the following few options will be helpful.

JSDOM: the DOM for Node

JSDOM is a pure Javascript implementation of the Document Object Model to be used in NodeJS. As mentioned previously, the DOM is not available to Node, so JSDOM is the closest you can get. It more or less emulates the browser.

Once a DOM is created, it is possible to interact with the web application or website you want to crawl programmatically, so something like clicking on a button is possible. If you are familiar with manipulating the DOM, using JSDOM will be straightforward.
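A small sketch of creating and manipulating a DOM with JSDOM, using a hard-coded HTML string:

```js
const { JSDOM } = require('jsdom');

const { document } = new JSDOM('<h2 class="title">Hello world</h2>').window;

// Manipulate the DOM with the usual browser methods and properties
const heading = document.querySelector('.title');
heading.textContent = 'Hello there!';
heading.classList.add('welcome');

console.log(heading.outerHTML);
// <h2 class="title welcome">Hello there!</h2>
```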

As you can see, JSDOM creates a DOM. Then you can manipulate this DOM with the same methods and properties you would use while manipulating the browser DOM.

To demonstrate how you could use JSDOM to interact with a website, we will get the first post of the Reddit r/programming forum and upvote it. Then, we will verify if the post has been upvoted.

Start by running the following command to install JSDOM and Axios: npm install jsdom axios

Then, make a file named crawler.js and copy/paste the following code:
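A sketch of what crawler.js could look like, based on the walkthrough below; the upvote-arrow selector is an assumption about old Reddit's markup and may need adjusting:

```js
const { JSDOM } = require('jsdom');
const axios = require('axios');

const upvoteFirstPost = async () => {
  try {
    const { data } = await axios.get('https://old.reddit.com/r/programming/');
    const dom = new JSDOM(data, {
      runScripts: 'dangerously',
      resources: 'usable',
    });
    const { document } = dom.window;

    // Assumed selector for the first post's upvote arrow on old Reddit
    const firstPost = document.querySelector('div > div.midcol > div.arrow');
    firstPost.click();

    // Old Reddit adds the "upmod" class to an arrow that has been upvoted
    const isUpvoted = firstPost.classList.contains('upmod');
    return isUpvoted
      ? 'Post has been upvoted successfully!'
      : 'The post has not been upvoted!';
  } catch (error) {
    throw error;
  }
};

upvoteFirstPost().then((msg) => console.log(msg));
```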

upvoteFirstPost() is an asynchronous function that will obtain the first post in r/programming and upvote it. To do this, axios sends an HTTP GET request to fetch the HTML of the URL specified. Then a new DOM is created by feeding the HTML that was fetched earlier.

The JSDOM constructor accepts the HTML as the first argument and the options as the second. The two options that have been added perform the following functions:

  • runScripts: When set to “dangerously”, it allows the execution of event handlers and any Javascript code. If you do not have a clear idea of the credibility of the scripts that your application will run, it is best to set runScripts to “outside-only”, which attaches the globals provided by the Javascript specification to the window object while preventing any script inside the HTML from being executed.
  • resources: When set to “usable”, it allows the loading of any external script declared using the <script> tag (e.g., the jQuery library fetched from a CDN).

Once the DOM has been created, you can use the same DOM methods to get the first post's upvote button and then click on it. To verify if it has been clicked, you could check the classList for a class called upmod. If this class exists in classList, a message is returned.

Now, you can pop open a terminal and run node crawler.js. You'll then see a neat string that will tell you if the post has been upvoted. While this example use case is trivial, you could build on top of it to create something powerful (for example, a bot that goes around upvoting a particular user's posts).

If you dislike the lack of expressiveness in JSDOM, if your crawling relies heavily on such manipulations, or if you need to recreate many different DOMs, the following options will be a better match.

Puppeteer: the headless browser

Puppeteer, as the name implies, allows you to manipulate the browser programmatically, just like how a puppet would be manipulated by its puppeteer. It achieves this by providing a developer with a high-level API to control Chrome, which runs headless by default but can be configured to run non-headless.

Taken from the Puppeteer Docs (Source)

Puppeteer is particularly more useful than the aforementioned tools because it allows you to crawl the web as if a real person were interacting with a browser. This opens up a few possibilities that weren't there before:

  • You can get screenshots or generate PDFs of pages.
  • You can crawl a Single Page Application and generate pre-rendered content.
  • You can automate many different user interactions, like keyboard inputs, form submissions, navigation, etc.

It can also play a big role in many other tasks outside the scope of web crawling, like UI testing, assisting with performance optimization, etc.

Quite often, you will probably want to take screenshots of websites or get to know a competitor's product catalog. Puppeteer can be used to do this. To start, install Puppeteer by running the following command: npm install puppeteer

This will download a bundled version of Chromium which takes up about 180 to 300 MB, depending on your operating system. If you wish to disable this and point Puppeteer to an already downloaded version of Chromium, you must set a few environment variables.

This, however, is not recommended. If you truly wish to avoid downloading Chromium for this tutorial, you can rely on the Puppeteer playground instead.

Let's attempt to get a screenshot and PDF of the r/programming forum on Reddit. Create a new file called crawler.js, and copy/paste the following code:
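A sketch of crawler.js matching the steps described below:

```js
const puppeteer = require('puppeteer');

async function getVisual() {
  try {
    const URL = 'https://www.reddit.com/r/programming/';
    // Launch a headless browser instance and open a new page (tab)
    const browser = await puppeteer.launch();
    const page = await browser.newPage();

    // Navigate to the URL, then capture a screenshot and a PDF
    await page.goto(URL);
    await page.screenshot({ path: 'screenshot.jpg' });
    await page.pdf({ path: 'page.pdf' });

    // Destroy the browser instance along with its pages
    await browser.close();
  } catch (error) {
    console.error(error);
  }
}

getVisual();
```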

getVisual() is an asynchronous function that will take a screenshot and PDF of the value assigned to the URL variable. To start, an instance of the browser is created by running puppeteer.launch(). Then, a new page is created. This page can be thought of as a tab in a regular browser. Then, by calling page.goto() with the URL as the parameter, the page that was created earlier is directed to the URL specified. Finally, the browser instance is destroyed along with the page.

Once that is done and the page has finished loading, a screenshot and PDF will be taken using page.screenshot() and page.pdf() respectively. You could also listen to the Javascript load event and then perform these actions, which is highly recommended at the production level.

To run the code, type node crawler.js in the terminal. After a few seconds, you will notice that two files named screenshot.jpg and page.pdf have been created.

Also, we've written a complete guide on how to download a file with Puppeteer. You should check it out!

Nightmare: an alternative to Puppeteer

Nightmare is another high-level browser automation library like Puppeteer. It uses Electron but is said to be roughly twice as fast as its predecessor PhantomJS, and it's more modern.

If you dislike Puppeteer or feel discouraged by the size of the Chromium bundle, Nightmare is an ideal choice. To start, install the Nightmare library by running the following command: npm install nightmare

Once Nightmare has been downloaded, we will use it to find ScrapingBee's website through a Google search. To do so, create a file called crawler.js and copy/paste the following code into it:
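A sketch of crawler.js following the steps described below; the Google selectors are assumptions and tend to change over time:

```js
const Nightmare = require('nightmare');
const nightmare = Nightmare();

nightmare
  .goto('https://www.google.com/')
  // Type the query into the search box (selector is an assumption)
  .type('input[title="Search"]', 'ScrapingBee')
  // Submit the form by clicking the "Google Search" button
  .click('input[value="Google Search"]')
  // Wait until the first result link has loaded (selector is an assumption)
  .wait('#rso > div:nth-child(1) a')
  // Use a DOM method to read the href of the first result's anchor tag
  .evaluate(() => document.querySelector('#rso > div:nth-child(1) a').href)
  .end()
  .then((link) => {
    console.log('ScrapingBee Web Link:', link);
  })
  .catch((error) => {
    console.error('Search failed:', error);
  });
```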

First, a Nightmare instance is created. Then, this instance is directed to the Google search engine by calling goto() once it has loaded. The search box is fetched using its selector. Then the value of the search box (an input tag) is changed to “ScrapingBee”.

After this is finished, the search form is submitted by clicking on the “Google Search” button. Then, Nightmare is told to wait until the first link has loaded. Once it has loaded, a DOM method will be used to fetch the value of the href attribute of the anchor tag that contains the link.

Finally, once everything is complete, the link is printed to the console. To run the code, type node crawler.js in your terminal.

Summary

That was a long read! But now you understand the different ways to use NodeJS and its rich ecosystem of libraries to crawl the web in any way you want. To wrap up, you learned:

  • NodeJS is a Javascript runtime that allows Javascript to be run server-side. It has a non-blocking nature thanks to the Event Loop.
  • HTTP clients such as Axios, SuperAgent, node-fetch, and Request are used to send HTTP requests to a server and receive a response.
  • Cheerio abstracts the best out of jQuery for the sole purpose of running it server-side for web crawling but does not execute Javascript code.
  • JSDOM creates a DOM per the standard Javascript specification out of an HTML string and allows you to perform DOM manipulations on it.
  • Puppeteer and Nightmare are high-level browser automation libraries that allow you to programmatically manipulate web applications as if a real person were interacting with them.

While this article tackles the main aspects of web scraping with NodeJS, it does not talk about web scraping without getting blocked.

If you want to learn how to avoid getting blocked, read our complete guide, and if you don't want to deal with this, you can always use our web scraping API.

Happy Scraping!

Resources

Would you like to read more? Check these links out:

  • NodeJS Website - Contains documentation and a lot of information on how to get started.
  • Puppeteer's Docs - Contains the API reference and guides for getting started.
  • Playwright - An alternative to Puppeteer, backed by Microsoft.
  • ScrapingBee's Blog - Contains a lot of information about Web Scraping goodies on multiple platforms.

Introduction

Web scraping or crawling is the process of fetching data from a third-party website by downloading and parsing the HTML code to extract the data you want.


“But you should use an API for this!”

However, not every website offers an API, and APIs don't always expose every piece of information you need. So, it's often the only solution to extract website data.

There are many use cases for web scraping:

  • E-commerce price monitoring
  • News aggregation
  • Lead generation
  • SEO (search engine result page monitoring)
  • Bank account aggregation (Mint in the US, Bankin’ in Europe)
  • Individuals and researchers building datasets otherwise not available.

The main problem is that most websites do not want to be scraped. They only want to serve content to real users using real web browsers (except Google - they all want to be scraped by Google).

So, when you scrape, you do not want to be recognized as a robot. There are two main ways to seem human: use human tools and emulate human behavior.

This post will guide you through all the tools websites use to block you and all the ways you can successfully overcome these obstacles.

Emulate Human Tool: Headless Chrome

Why Use Headless Browsing?

When you open your browser and go to a webpage, it almost always means that you ask an HTTP server for some content. One of the easiest ways to pull content from an HTTP server is to use a classic command-line tool such as cURL.

The thing is, if you just do curl www.google.com, Google has many ways to know that you are not a human (for example, by looking at the headers). Headers are small pieces of information that go with every HTTP request that hits the servers. One of those pieces of information precisely describes the client making the request: the infamous “User-Agent” header. Just by looking at the “User-Agent” header, Google knows that you are using cURL. If you want to learn more about headers, the Wikipedia page is great. As an experiment, just go over here. This webpage simply displays the header information of your request.

Headers are easy to alter with cURL, and copying the User-Agent header of a legit browser could do the trick. In the real world, you'd need to set more than one header, but it is not difficult to artificially forge an HTTP request with cURL or any library to make the request look exactly like a request made with a browser. Everybody knows this. So, to determine if you are using a real browser, websites will check for something that cURL and HTTP libraries cannot do: executing Javascript code.
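Before moving on, as an illustration of the header-forging part, here is a hedged sketch of setting browser-like headers from NodeJS with axios; the target URL and the User-Agent string are only example values:

```js
const axios = require('axios');

axios
  .get('https://www.example.com/', {
    headers: {
      // Example User-Agent copied from a real desktop Chrome browser
      'User-Agent':
        'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 ' +
        '(KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36',
      'Accept-Language': 'en-US,en;q=0.9',
    },
  })
  .then((response) => console.log(response.status));
```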

Do you speak Javascript?

The concept is simple, the website embeds a Javascript snippet in its webpage that, once executed, will “unlock” the webpage. If you're using a real browser, you won't notice the difference. If you're not, you'll receive an HTML page with some obscure Javascript code in it:

an actual example of such a snippet

Once again, this solution is not completely bulletproof, mainly because it is now very easy to execute Javascript outside of a browser with Node.js. However, the web has evolved and there are other tricks to determine if you are using a real browser.

Headless Browsing

Trying to execute Javascript snippets on the side with Node.js is difficult and not robust. More importantly, as soon as the website has a more complicated check system or is a big single-page application, cURL and pseudo-JS execution with Node.js become useless. So the best way to look like a real browser is to actually use one.

Headless Browsers will behave like a real browser except that you will easily be able to programmatically use them. The most popular is Chrome Headless, a Chrome option that behaves like Chrome without all of the user interface wrapping it.

The easiest way to use Headless Chrome is by calling a driver that wraps all its functionality into an easy API. Selenium, Playwright, and Puppeteer are the three most famous solutions.
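As a rough sketch, driving Headless Chrome through Puppeteer (and overriding the default headless User-Agent) can look like this; the URL and the User-Agent string are placeholders:

```js
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();

  // Replace the default "HeadlessChrome" User-Agent with a regular one (example value)
  await page.setUserAgent(
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 ' +
      '(KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36'
  );

  await page.goto('https://www.example.com/');
  const html = await page.content();
  console.log('page size:', html.length);

  await browser.close();
})();
```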

However, it will not be enough as websites now have tools that detect headless browsers. This arms race has been going on for a long time.

While these solutions can be easy to do on your local computer, it can be trickier to make this work at scale.


Managing lots of Chrome headless instances is one of the many problems we solve at ScrapingBee

Tired of getting blocked while scraping the web? Our API handles headless browsers and rotates proxies for you.

Browser Fingerprinting

Everyone, especially front-end devs, knows that every browser behaves differently. Sometimes it's about rendering CSS, sometimes Javascript, and sometimes just internal properties. Most of these differences are well-known, and it is now possible to detect whether a browser is actually who it pretends to be. In other words, the website asks: “do all of the browser properties and behaviors match what I know about the User-Agent sent by this browser?”

This is why there is an everlasting arms race between web scrapers who want to pass themselves as a real browser and websites who want to distinguish headless from the rest.


However, in this arms race, web scrapers tend to have a big advantage. Here is why:

Screenshot of Chrome malware alert

Most of the time, when Javascript code tries to detect whether it's being run in headless mode, it is malware trying to evade behavioral fingerprinting. This means that the Javascript will behave nicely inside a scanning environment and badly inside real browsers. And this is why the team behind the Chrome headless mode is trying to make it indistinguishable from a real user's web browser in order to stop malware from doing that. Web scrapers can profit from this effort.

Another thing to know is that while running 20 cURL in parallel is trivial and Chrome Headless is relatively easy to use for small use cases, it can be tricky to put at scale. Because it uses lots of RAM, managing more than 20 instances of it is a challenge.

If you want to learn more about browser fingerprinting I suggest you take a look at Antoine Vastel's blog, which is entirely dedicated to this subject.

That's about all you need to know about how to pretend you are using a real browser. Let's now take a look at how to behave like a real human.

TLS Fingerprinting

What is it?

TLS stands for Transport Layer Security and is the successor of SSL, which is basically what the “S” in HTTPS stood for.

This protocol ensures privacy and data integrity between two or more communicating computer applications (in our case, a web browser or a script and an HTTP server).

Similar to browser fingerprinting, the goal of TLS fingerprinting is to uniquely identify users based on the way they use TLS.

How this protocol works can be split into two big parts.

First, when the client connects to the server, a TLS handshake happens. During this handshake, many requests are sent between the two to ensure that everyone is actually who they claim to be.

Then, if the handshake has been successful, the protocol describes how the client and the server should encrypt and decrypt the data in a secure way. If you want a detailed explanation, check out this great introduction by Cloudflare.

Most of the data points used to build the fingerprint come from the TLS handshake, and if you want to see what a TLS fingerprint looks like, you can go visit this awesome online database.

On this website, you can see that the most used fingerprint last week was used 22.19% of the time (at the time of writing this article).

A TLS fingerprint

This number is very big, at least two orders of magnitude higher than the share of the most common browser fingerprint. It actually makes sense, as a TLS fingerprint is computed using far fewer parameters than a browser fingerprint.

Those parameters are, amongst others:

  • TLS version
  • Handshake version
  • Cipher suites supported
  • Extensions

If you wish to know what your TLS fingerprint is, I suggest you visit this website.

How do I change it?

Ideally, in order to increase your stealth when scraping the web, you should be changing your TLS parameters. However, this is harder than it looks.


Firstly, because there are not that many TLS fingerprints out there, simply randomizing those parameters won't work. Your fingerprint will be so rare that it will be instantly flagged as fake.

Secondly, TLS parameters are low-level stuff that relies heavily on system dependencies. So, changing them is not straightforward.

For example, the famous Python requests module doesn't support changing the TLS fingerprint out of the box. Here are a few resources for changing your TLS version and cipher suite in your favorite language (a small NodeJS sketch follows the list):

  • Python with HTTPAdapter and requests
  • NodeJS with the TLS package
  • Ruby with OpenSSL
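
For NodeJS specifically, a hedged sketch of altering the offered cipher suites with the built-in https module; the cipher list and target URL are just examples:

```js
const https = require('https');

// Restricting or reordering the cipher suites changes what is offered
// during the TLS handshake, and therefore your TLS fingerprint.
const agent = new https.Agent({
  ciphers: [
    'ECDHE-RSA-AES128-GCM-SHA256',
    'ECDHE-RSA-AES256-GCM-SHA384',
    'ECDHE-RSA-CHACHA20-POLY1305',
  ].join(':'),
  minVersion: 'TLSv1.2',
});

https.get('https://www.example.com/', { agent }, (res) => {
  console.log('status:', res.statusCode);
  console.log('negotiated cipher:', res.socket.getCipher());
  res.resume(); // drain the response body
});
```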

Keep in mind that most of these libraries rely on the SSL and TLS implementation of your system. OpenSSL is the most widely used, and you might need to change its version in order to completely alter your fingerprint.


Emulate Human Behaviour: Proxy, Captcha Solving and Request Patterns

Proxy Yourself

A human using a real browser will rarely request 20 pages per second from the same website. So if you want to request a lot of pages from the same website, you have to trick the website into thinking that all those requests come from different places in the world, i.e., different IP addresses. In other words, you need to use proxies.
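For example, a minimal sketch of routing axios requests through a proxy; the proxy host, port, and credentials are placeholders:

```js
const axios = require('axios');

axios
  .get('https://www.example.com/', {
    // Placeholder proxy details; replace with one of your own proxies
    proxy: {
      protocol: 'http',
      host: '203.0.113.10',
      port: 8080,
      auth: { username: 'user', password: 'pass' },
    },
  })
  .then((response) => console.log(response.status));
```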

Proxies are not very expensive: ~$1 per IP. However, if you need to do more than ~10k requests per day on the same website, costs can go up quickly, with hundreds of addresses needed. One thing to consider is that proxy IPs need to be constantly monitored in order to discard the ones that are no longer working and replace them.

There are several proxy solutions on the market, here are the most used rotating proxy providers: Luminati Network, Blazing SEO and SmartProxy.

There are also a lot of free proxy lists, but I don't recommend using these because the proxies are often slow and unreliable, and websites offering these lists are not always transparent about where the proxies are located. Free proxy lists are usually public, and therefore, their IPs will be automatically banned by most websites. Proxy quality is important: anti-crawling services are known to maintain internal lists of proxy IPs, so any traffic coming from those IPs will also be blocked. Be careful to choose proxies with a good reputation. This is why I recommend using a paid proxy network or building your own.

Another proxy type that you could look into is mobile, 3G, and 4G proxies. These are helpful for scraping hard-to-scrape, mobile-first websites, like social media.

To build your own proxy you could take a look at Scrapoxy, a great open-source API that allows you to build a proxy API on top of different cloud providers. Scrapoxy will create a proxy pool by creating instances on various cloud providers (AWS, OVH, DigitalOcean). Then, you will be able to configure your client so it uses the Scrapoxy URL as the main proxy, and Scrapoxy will automatically assign a proxy from the pool. Scrapoxy is easily customizable to fit your needs (rate limit, blacklist, etc.), but it can be a little tedious to put in place.

You could also use the TOR network, aka The Onion Router. It is a worldwide computer network designed to route traffic through many different servers to hide its origin. TOR usage makes network surveillance and traffic analysis very difficult. There are a lot of use cases for TOR, such as privacy, freedom of speech, journalism under dictatorship regimes, and of course, illegal activities. In the context of web scraping, TOR can hide your IP address and change your bot's IP address every 10 minutes. However, the IP addresses of TOR exit nodes are public, and some websites block TOR traffic using a simple rule: if the server receives a request from one of the TOR public exit nodes, it blocks it. That's why in many cases, TOR won't help you compared to classic proxies. It's also worth noting that traffic through TOR is inherently much slower because of the multiple routing hops.

Captchas

Sometimes proxies will not be enough. Some websites systematically ask you to confirm that you are a human with so-called CAPTCHAs. Most of the time, CAPTCHAs are only displayed to suspicious IPs, so switching proxies will work in those cases. For the other cases, you'll need to use a CAPTCHA-solving service (2Captcha and Death by Captcha come to mind).

While some CAPTCHAs can be automatically resolved with optical character recognition (OCR), the most recent ones have to be solved by hand.

Old captcha, breakable programmatically
Google ReCaptcha V2

If you use the aforementioned services, on the other side of the API call you'll have hundreds of people resolving CAPTCHAs for as low as 20 cents an hour.

But then again, even if you solve CAPTCHAs or switch proxies as soon as you see one, websites can still detect your data extraction process.

Request Pattern

Another advanced tool used by websites to detect scraping is pattern recognition. So if you plan to scrape every ID from 1 to 10,000 for the URL www.example.com/product/, try not to do it sequentially or with a constant request rate. You could, for example, maintain a set of integers going from 1 to 10,000 and randomly pick one integer from this set before scraping that product, as the sketch below illustrates.
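A hedged sketch of that idea in NodeJS: shuffle the IDs and add a random delay between requests (the product URL comes from the example above):

```js
const axios = require('axios');

const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function scrapeProducts() {
  // Build the set of IDs 1..10000 and shuffle it (Fisher-Yates)
  const ids = Array.from({ length: 10000 }, (_, i) => i + 1);
  for (let i = ids.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [ids[i], ids[j]] = [ids[j], ids[i]];
  }

  for (const id of ids) {
    const { data } = await axios.get(`https://www.example.com/product/${id}`);
    console.log(id, data.length);
    // Wait a random 1-5 seconds so the request rate is not constant
    await sleep(1000 + Math.random() * 4000);
  }
}

scrapeProducts().catch(console.error);
```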

Some websites also do statistics on browser fingerprints per endpoint. This means that if you don't change some parameters in your headless browser and target a single endpoint, they might block you.

Websites also tend to monitor the origin of traffic, so if you want to scrape a website in Brazil, try not to do it with proxies in Vietnam.

But from experience, I can tell you that rate is the most important factor in “Request Pattern Recognition”, so the slower you scrape, the less chance you have of being discovered.

Emulate Machine Behaviour: Reverse engineering of API

Sometimes, the server expects the client to be a machine. In these cases, hiding yourself is way easier.

Reverse engineering of API

Basically, this “trick” comes down to two things:

  1. Analyzing a web page's behaviour to find interesting API calls
  2. Forging those API calls with your code

For example, let's say that I want to get all the comments of a famous social network. I notice that when I click on the “load more comments” button, this happens in my inspector:

Request being made when clicking more comments

Notice that we filter out every request except “XHR” ones to avoid noise.

When we look at which request is being made and which response we get back... bingo!

Request response

Now if we look at the “Headers” tab, we should have everything we need to replay this request and understand the value of each parameter. This will allow us to make this request from a simple HTTP client.

HTTP Client response
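
To make this concrete, here is a hedged sketch of replaying such a call with axios; the endpoint, parameter names, and header values are all hypothetical and depend entirely on the site you are inspecting:

```js
const axios = require('axios');

// Hypothetical "load more comments" endpoint and parameters,
// reconstructed from what the "Headers" tab would show.
async function loadMoreComments(postId, cursor) {
  const { data } = await axios.get('https://www.example.com/api/comments', {
    params: { post_id: postId, cursor, limit: 20 },
    headers: {
      // Copy the headers the browser sent, e.g. a browser-like User-Agent
      'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)',
      'X-Requested-With': 'XMLHttpRequest',
    },
  });
  return data.comments;
}

loadMoreComments('12345', null).then((comments) => console.log(comments));
```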

The hardest part of this process is to understand the role of each parameter in the request. Know that you can right-click on any request in the Chrome DevTools inspector, export it in HAR format, and then import it into your favorite HTTP client (I love Paw and Postman).

This will allow you to have all the parameters of a working request laid out and will make your experimentation much faster and fun.

Previous request imported in Paw

Reverse-Engineering of Mobile Apps


The same principles apply when it comes to reverse engineering a mobile app. You will want to intercept the requests your mobile app makes to the server and replay them with your code.

Doing this is hard for two reasons:

  • To intercept requests, you will need a Man-In-The-Middle proxy (Charles Proxy, for example)
  • Mobile apps can fingerprint your requests and obfuscate them more easily than a web app can

For example, when Pokemon Go was released a few years ago, tons of people cheated the game after reverse-engineering the requests the mobile app made.

What they did not know was that the mobile app was sending a “secret” parameter that was not sent by the cheating script. It was easy for Niantic to identify the cheaters, and a few weeks later, a massive number of players were banned for cheating.

Also, here is an interesting example about someone who reverse-engineered the Starbucks API.

Conclusion

Here is a recap of all the anti-bot techniques we saw in this article:

Anti-bot technique | Counter measure | Supported by ScrapingBee
Browser Fingerprinting | Headless browsers | ✅
IP-rate limiting | Rotating proxies | ✅
Banning Data center IPs | Residential IPs | ✅
TLS Fingerprinting | Forge and rotate TLS fingerprints | ✅
Captchas on suspicious activity | All of the above | ✅
Systematic Captchas | Captcha-solving tools and services | ✅

I hope that this overview will help you understand web scraping better and that you learned a lot reading this article.

We leverage everything I talked about in this post at ScrapingBee. Our web scraping API handles thousands of requests per second without ever being blocked. If you don’t want to lose too much time setting everything up, make sure to try ScrapingBee. The first 1k API calls are on us :).


We recently published a guide about the best web scraping tools on the market, don't hesitate to take a look!