Advanced Bot Protection 2-27-2023
Contents
Understanding Advanced Bot Protection
    Understanding Bot Protection
    Understanding How Advanced Bot Protection Handles Traffic
    Understanding the Problem of False Positives
        Managing False Positives in Practice
            Testing for False Positives
        Understanding the "Rate Limiting" and "Identify Eventually" Conditions and False Positives
    Understanding How Imperva CloudWAF Integrates with Advanced Bot Protection
Getting Started with Imperva Advanced Bot Protection
    Getting Started with Advanced Bot Protection - Using a Connector
    Configuring the True Client IP
Working with Advanced Bot Protection
    Creating a Website Group
        Creating a Website Group - Using a Connector
    Understanding the Advanced Bot Protection Display
        Understanding the Website Groups Window
        Understanding the Issues Dialog Box
        Understanding the Progress Bar
        Understanding the Policies Window
        Understanding the Conditions Window
    Analyzing your Bot Protection Activity
        Understanding the Individual Element Activity Graphs
            Understanding the Activity Graphs of Website Groups, Websites, and Policies
            Understanding the Activity Graphs of Conditions
        Understanding the Dashboard
            Accessing the Dashboard
            Using the Filters in the Standard Dashboard Displays
            Understanding Regions in the Dashboard Displays
            Understanding the Traffic Overview Display
            Understanding the Other (non-Traffic Overview) Displays
            Understanding the Usage Dashboard
        Exporting Dashboard Data to a Near Real Time SIEM
    Working with Policies
        Working with the Default Policy
            Accessing the Default Policy
            Understanding the Structure of the Policies and the Default Policy
        Configuring per-Path Policies
            Understanding per-Path Policies
Search engines are good bots. They index all your web pages, enabling your users to do useful searches and find your
content.
Bad bots are run by people who are after the data you wish to protect. For example, price-scraping and content-scraping bots give your competitors an unfair advantage by supplying them with high volumes of your pricing data, which they can then undercut.
• Distinguish between legitimate human traffic and bot traffic, and intercept the bad bot traffic reliably.
• Distinguish between good bots and bad bots, and allow the good bots through.
The basic architecture of a web application and its connection to the outside world is presented below.
Your web application on the right is connected to the outside world via the Imperva CloudWAF.
Traffic flows from the client machines via the CloudWAF. CloudWAF forwards HTTP requests from the client to the web
application, and forwards the returning traffic from the web application back to the client.
As you can see, the Advanced Bot Protection service communicates only with CloudWAF. CloudWAF receives an HTTP request from the client, and the Advanced Bot Protection service inspects the request header in order to determine the source of the request: human or bot. The Advanced Bot Protection service analyzes the request header and, based on the result of that analysis, sends an instruction back to CloudWAF. It is CloudWAF that carries out the instruction regarding the HTTP request: if instructed to block the request, it is CloudWAF that blocks it; if instructed to serve a captcha page to the client, it is CloudWAF that serves the captcha page, and so on.
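The division of responsibilities described above can be sketched in code. The sketch below is purely illustrative: the function names, the header fields inspected, and the decision logic are all hypothetical stand-ins, not the Imperva API.

```python
# Illustrative sketch of the division of labor: the detection service only
# analyzes the header and returns an instruction; the WAF alone acts on it.
# All names and rules here are hypothetical.

def analyze_request_header(header: dict) -> str:
    """Stand-in for the bot-detection analysis: returns an instruction
    for the WAF based only on the request header."""
    ua = header.get("User-Agent", "")
    if "GoodBot" in ua:
        return "allow"      # recognized good bot
    if "BadBot" in ua:
        return "block"      # recognized bad bot
    return "captcha"        # uncertain: challenge the client

def waf_handle(header: dict) -> str:
    """Stand-in for CloudWAF: it receives the instruction and executes it."""
    instruction = analyze_request_header(header)
    actions = {
        "allow": "forwarded to web application",
        "block": "request blocked",
        "captcha": "captcha page served",
    }
    return actions[instruction]
```

The point of the split is that the detection service never touches the traffic itself; it only returns an instruction that the WAF executes.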
9. The client's browser executes the JavaScript, which interrogates the client's machine and browser, fingerprints them, and sends the fingerprint to the Advanced Bot Protection service.
10. The Advanced Bot Protection service analyzes the fingerprint, comparing its richer data to the Conditions in
your Policy, and sends a token to the client via CloudWAF.
11. CloudWAF acts on the instruction from the Advanced Bot Protection service, allowing the request through, blocking it, or taking some other action.
12. The client then stores the token as a cookie.
Notes:
▪ If a bad bot does not support JavaScript - and some do not - it will be unable to run the initial script, and that inability is recognized by Advanced Bot Protection.
Sometimes, legitimate users appear like bots that do not support JavaScript. For example, if a user has a very slow connection, or is using a browser extension that blocks most JavaScript files, that user's traffic will look like that of a bot that does not support JavaScript. In these cases, the Identify Directive redirects the user to an identification page. A bot is stopped right there. A legitimate user's browser processes the JavaScript as above and is allowed through. Should a user run a browser extension that blocks the JavaScript file, they will eventually see a message on the Identify page informing them of this. Most users who run these browser extensions understand their effect and will then allow the JavaScript so that they can continue browsing your site.
▪ If a bad bot does support JavaScript, Advanced Bot Protection's browser automation detection detects and flags that bot.
▪ The fingerprinting in step 9 and any requests after step 12 above can be understood with the
following analogy. A young person entering a club with an age limit has to show ID. Security
checks the person's ID and allows entry based on age. But the security guard also marks the
young person's arm with an indelible ink stamp. The stamp is like a request with a cookie.
Now a malicious user can tamper with the browser payload returned by the challenge response.
This is like a young person forging their ID card. This is mitigated by Advanced Bot Protection's bad
challenge postback Condition.
A malicious user can also tamper with the cookie. This is like a young person faking the stamp. This
is mitigated by Advanced Bot Protection's invalid token Condition.
Genuine user traffic does not match either of the above two Conditions, so your Policies should
block access when either of them is matched.
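One common way to make a token tamper-evident, and so implement something like the invalid token check described above, is to sign it with a server-side secret. The sketch below uses an HMAC signature; this is a generic illustration, not Imperva's actual token scheme.

```python
import hmac
import hashlib

SECRET = b"server-side-secret"  # hypothetical signing key, never sent to clients

def issue_token(fingerprint: str) -> str:
    """Issue a token bound to the client fingerprint - the 'ink stamp'."""
    sig = hmac.new(SECRET, fingerprint.encode(), hashlib.sha256).hexdigest()
    return f"{fingerprint}.{sig}"

def token_is_valid(token: str) -> bool:
    """A tampered or forged token fails the signature check,
    analogous to matching an 'invalid token' Condition."""
    try:
        fingerprint, sig = token.rsplit(".", 1)
    except ValueError:
        return False  # malformed token: no signature present
    expected = hmac.new(SECRET, fingerprint.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)
```

Because the client never sees the secret, it cannot produce a valid signature for an altered fingerprint, which is why faking the "stamp" is detectable.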
Note: If you want to use Imperva Advanced Bot Protection, but you do not want it integrated with
Imperva CloudWAF, you can use a different Integration known as a Connector, instead of
CloudWAF. Currently, Advanced Bot Protection can be integrated with the following Connectors:
• Cloudflare
• F5
• Lambda@Edge on AWS Cloudfront
• Nginx
• Fastly
High rates of false positives are a threat to a healthy website. If a significant number of users are being blocked or sent
to captcha pages too often, those users will be unhappy.
For this reason, you will not want to activate Conditions that produce high rates of false positives. You can identify
those Conditions by analyzing the traffic graphs.
Most customers measure the false positive rate by dividing the total number of captchas attempted by the total
number of captchas served. For customers with bots that ruin the captcha metrics with many obvious failed attempts,
it is better to calculate the false positive rate by dividing the number of successful captchas by the total number
served.
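The two ways of measuring the false positive rate can be expressed as simple ratios; the function names below are illustrative.

```python
def false_positive_rate(served: int, attempted: int) -> float:
    """Common measure: the share of served captchas that anyone attempted."""
    return attempted / served if served else 0.0

def false_positive_rate_strict(served: int, solved: int) -> float:
    """Stricter measure for sites where bots pollute the attempt count
    with obvious failures: count only solved captchas as likely humans."""
    return solved / served if served else 0.0
```

For example, 50 attempts out of 1000 served captchas gives a 5% rate under the first measure; if only 20 of those were solved, the stricter measure gives 2%.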
Some bot operators will employ captcha solving services to bypass captchas that you present to their bots. Should
you see a suspicious rise in captcha solves, simply move that Condition to block instead of captcha. If that results in
too many blocks issued to real humans, you must craft a new Condition to better pinpoint the bad bot traffic.
Once you have gained an understanding of the basic workings of Advanced Bot Protection, see Managing False
Positives in Practice.
For more information on managing false positives in practice, refer to the following sections:
Captchas
If in the traffic graphs you see that those captchas are being solved, or you start getting complaints from customers about excessive captcha challenges, then you can conclude that this Condition is producing false positives. You must not move it into the Block Directive, and you should weigh the damage from keeping it active and annoying legitimate users with captchas against the potential damage from deactivating it and allowing that particular bot attack.
Human traffic is typically cyclic, while bot traffic is typically either flat or spiky. If a Condition that you would normally expect to be triggered by a bot shows a graph similar to human traffic, that indicates false positives. So if you observe the traffic from the Known violator data centers or the Bad user agents Conditions and you see that the behavior over time is actually cyclic, you may conclude that the cyclic traffic is legitimate traffic that the Condition is matching incorrectly.
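One way to quantify "cyclic versus flat or spiky" is to compute the autocorrelation of hourly request counts at a 24-hour lag: human-like daily cycles score high, while flat bot traffic scores near zero. This heuristic is an illustration of the idea, not a feature of the product.

```python
import math

def daily_autocorrelation(hourly_counts: list) -> float:
    """Autocorrelation at a 24-hour lag. Values near 1 suggest a cyclic,
    human-like daily pattern; flat or spiky traffic scores much lower."""
    lag = 24
    n = len(hourly_counts)
    mean = sum(hourly_counts) / n
    var = sum((x - mean) ** 2 for x in hourly_counts)
    if var == 0:
        return 0.0  # perfectly flat traffic: no daily cycle at all
    cov = sum((hourly_counts[i] - mean) * (hourly_counts[i + lag] - mean)
              for i in range(n - lag))
    return cov / var

# Two days of sinusoidal (human-like) hourly traffic vs. flat bot traffic
human = [100 + 50 * math.sin(2 * math.pi * h / 24) for h in range(48)]
flat = [100.0] * 48
```

A high score for traffic matched by a "bot" Condition is a hint that the Condition may be catching legitimate users.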
Understanding the "Rate Limiting" and "Identify Eventually" Conditions and False Positives
The Rate limiting Condition should always be in the Captcha Directive and you should tune it for the traffic patterns
of your Website.
Note that the Rate limiting and Identify eventually Conditions may elicit excessive false positives for the following
reason.
Not every URL Path on your Website returns viewable HTML pages. Some Paths return machine-readable content that is accessed by JavaScript running on a webpage. These Paths are API endpoints, and some of them generate a lot of requests. For example, when a text field offers suggestions as the user enters characters, each character entered generates a request that needs to be processed. The problem is that if you do not identify these Paths properly, the request counts will be inflated, and they might trigger the Rate limiting or Identify eventually Conditions in circumstances where such triggers are not needed. (Note also that the captcha and identify Directives do not actually work with Paths that are API endpoints, and such Paths need their own Policies in which these two Directives are never used. For more information, see Configuring per-Path Policies for Endpoints with API Calls.)
You can mitigate this effect with the proper use of per-Path Policies in which Rate Limiting is either disabled, or the request counts go into a Custom Scope. For more information, see Understanding per-Path Policies and Rate Limiting.
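The effect of a Custom Scope can be illustrated with a toy counter: requests to API Paths are tallied in their own scope, so they cannot inflate the main request count used for rate limiting. The path names and scoping logic below are hypothetical.

```python
from collections import defaultdict

# Hypothetical illustration of per-Path scoping: chatty API endpoints are
# counted separately so they do not inflate the main rate-limit count.

API_PATHS = {"/api/suggest", "/api/search"}  # example endpoints only

def scope_for(path: str) -> str:
    """Route API-endpoint requests into their own counting scope."""
    return "api" if path in API_PATHS else "default"

def count_requests(paths: list) -> dict:
    counts = defaultdict(int)
    for p in paths:
        counts[scope_for(p)] += 1
    return dict(counts)
```

Without the separate scope, ten autocomplete requests would count against the same limit as two page views; with it, the page-view count stays accurate.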
At the same time, and for the same protected assets, CloudWAF acts as an Integration for its other services, for example Account Takeover. These services also send instructions to CloudWAF to act on incoming traffic.
It is quite feasible that an incoming request will elicit a response from more than one CloudWAF service, including
Advanced Bot Protection.
If more than one CloudWAF service gives an instruction to CloudWAF to act on an incoming request, CloudWAF always applies the most severe instruction it receives from any of its services.
For example, if a request elicits a captcha instruction from Advanced Bot Protection, but a block instruction from a
different service, then CloudWAF executes the block instruction on that request.
• block
• captcha
• identify
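The "most severe instruction wins" rule can be sketched as follows. The severity ordering used here - block above captcha above identify - is an assumption inferred from the example above, in which a block instruction overrides a captcha instruction.

```python
# Assumed severity ranking; only block > captcha is confirmed by the
# example in the text, identify's rank is an assumption.
SEVERITY = {"identify": 1, "captcha": 2, "block": 3}

def resolve(instructions: list) -> str:
    """If several services respond to one request, the most severe
    instruction is the one the WAF executes."""
    return max(instructions, key=lambda i: SEVERITY[i])
```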
Note: If you want to use Imperva Advanced Bot Protection but you want to use a Connector instead
of CloudWAF for your Integration, see Getting Started with Advanced Bot Protection - Using a
Connector.
1. In the CloudWAF account to which you wish to add Advanced Bot Protection, add the website that you wish to protect. For more information, see Onboarding a Site – Web Protection and CDN.
2. [OPTIONAL] Configure the error page. This is the page that is shown to users if it appears that they have a slow connection, their JavaScript is disabled, they have an ad blocker, or their cookies are disabled. For more information, see Custom Error Pages.
3. Select your captcha provider. For more information, see Web Protection - Security Settings.
4. In the navigation pane, click Advanced Bot Protection. The Advanced Bot Protection Launch window appears.
5. Click Launch Advanced Bot Protection. The Advanced Bot Protection window appears.
Since many websites have clones that perform identical functions to the "parent website" - for example, localized versions such as acmebooks.com, acmebooks.co.fr, and acmebooks.co.nl - Advanced Bot Protection allows you to group your websites into Website Groups. You then apply all your configurations to the Website Group, saving a lot of time. You cannot apply configurations to an individual Website - only to a Website Group.
For more information, see Creating a Website Group. Your first Website Group contains the Website you wish to protect, and activates the Default Policy. This provides you with a good out-of-the-box bot protection level for your Website.
1. Carry out the integration procedure for your chosen Connector. For more information, see Integrating Advanced
Bot Protection with a Connector.
2. In the CloudWAF navigation pane, click Advanced Bot Protection. The Advanced Bot Protection Launch
window appears.
3. Click Launch Advanced Bot Protection. The Advanced Bot Protection window appears.
Since many websites have clones that perform identical functions to the "parent website" - for example, localized versions such as acmebooks.com, acmebooks.co.fr, and acmebooks.co.nl - Advanced Bot Protection allows you to group your websites into Website Groups. You then apply all your configurations to the Website Group, saving a lot of time. You cannot apply configurations to an individual Website - only to a Website Group.
For more information, see Creating a Website Group. Your first Website Group contains the Website you wish to protect, and activates the Default Policy. This provides you with a good out-of-the-box bot protection level for your Website.
If your setup involves CloudWAF (or a Connector) and Advanced Bot Protection being deployed In Back of Your CDN,
the client IP address that is forwarded to CloudWAF or Advanced Bot Protection is that of the CDN, and not that of the
client machine.
Advanced Bot Protection relies on correct client IP address identification for a number of features, and you must ensure that you configure Advanced Bot Protection to read the correct true client IP. CDNs are configured to forward the true client IP in a header such as X-Forwarded-For, True-Client-IP, or CF-Connecting-IP. Your CDN and your own setup determine the precise header used.
You must configure Advanced Bot Protection to read the true client IP as it appears in the headers.
1. Refer to your CDN's documentation to discover the correct header parameter for true client IP.
2. In Advanced Bot Protection, for a particular Website, go to Advanced Settings. For more information, see
Editing a Website.
3. In Advanced Settings, type or paste into the following fields the true client IP header parameter from Step 1.
▪ Challenge IP Lookup Mode: Header Name
▪ Analysis IP Lookup Mode: Header Name
If there are multiple, comma-separated IP addresses specified in your true client IP header, Reverse Index specifies the zero-indexed position of the IP address to select, counting from the end of the list.
4. Click Save.
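The Reverse Index behavior described in Step 3 can be illustrated with a short sketch that selects an IP address from a comma-separated header value, counting zero-indexed from the end of the list. The function below is illustrative, not part of the product.

```python
def true_client_ip(header_value: str, reverse_index: int = 0) -> str:
    """Pick the client IP from a comma-separated header such as
    X-Forwarded-For. reverse_index counts zero-indexed from the END
    of the list, mirroring the Reverse Index setting described above."""
    ips = [ip.strip() for ip in header_value.split(",")]
    return ips[-1 - reverse_index]
```

With a header of "203.0.113.7, 198.51.100.2, 192.0.2.1", a Reverse Index of 0 selects 192.0.2.1 and a Reverse Index of 1 selects 198.51.100.2.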
Since many websites have clones that perform identical functions to the "parent website" - for example, localized versions such as acmebooks.com, acmebooks.co.fr, and acmebooks.co.nl - Advanced Bot Protection allows you to group your websites into Website Groups. You then apply all your configurations to the Website Group, saving a lot of time. You cannot apply configurations to an individual Website - only to a Website Group.
1. Create your first Website Group. Even if you are protecting a single website, you must use a Website Group
because it is at the Website Group level that you add and configure your Policies. For more information, see
Adding a Website Group.
When you create a Website Group, you are required to add one Website to it. The Default Policy applies to the
Website Group by default until you make changes. The Default Policy provides you with a good out-of-the-box
bot protection level for your Website.
2. With your Website generating traffic, analyze the type of bot attacks your Website is under by looking at the
Dashboard and at the individual Traffic Graphs for each Website Group, Website, Path, Policy, and Condition. For
more information, see Analyzing the Performance of Bot Protection.
3. Based on your analysis of the traffic on your Website Group or Website, make changes to the Policies that define
that Website's or Website Group's defense.
There are many possibilities for configuration here, ranging from activating or deactivating Conditions, to adding or removing the Flags within an individual Condition. The out-of-the-box Policies and Conditions provide powerful protection at the basic level, but the system allows for highly sophisticated configurations with which you will become familiar as you gain experience.
4. Update your Configuration. If you have made changes, examine them and then update the system.
Initially, you will probably work with one Website Group, performing actions in steps 2 to 4 above: you examine the
traffic to see the attack patterns, make changes to your Policies and Conditions, update the configuration and start
again. Eventually you will want to expand your system by adding more Websites and more Website Groups.
Note: If you are using a Connector instead of CloudWAF, use the procedure in Creating a Website
Group - Using a Connector.
4. Click the Create Website button. The Create Website dialog box appears.
If you are subscribed to CloudWAF only, the Create Website Group dialog box appears as follows:
If you are subscribed to both CloudWAF and Connectors, the Create Website Group dialog box appears as
follows:
Note: You may be required to configure one or more of the Website parameters after you have created the Website Group or added a Website. For more information, see Editing a Website.
Max requests per minute: The maximum number of requests to the site in a minute that is allowable before rate limiting is triggered.
Max requests per session: The maximum number of requests to the site in a single session that is allowable before rate limiting is triggered.
Max session length: The maximum length of a session that is allowable before rate limiting is triggered. Select the time units from the adjacent drop-down list.
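The interplay of the three limits above can be sketched as a single check: rate limiting triggers when any one of the limits is exceeded. The default values and the function itself are hypothetical illustrations, not product defaults.

```python
def rate_limited(requests_last_minute: int,
                 requests_this_session: int,
                 session_length_minutes: int,
                 max_per_minute: int = 300,
                 max_per_session: int = 5000,
                 max_session_minutes: int = 120) -> bool:
    """Rate limiting triggers if ANY of the three limits is exceeded.
    All default thresholds here are made-up example values."""
    return (requests_last_minute > max_per_minute
            or requests_this_session > max_per_session
            or session_length_minutes > max_session_minutes)
```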
Before you create your first Website Group when using Advanced Bot Protection with a Connector, make sure that you
have integrated Advanced Bot Protection with your Connector. For more information, see Integrating Advanced Bot
Protection with a Connector.
To create a Website Group in your Advanced Bot Protection account when using a Connector:
4. Click the Create Website button. The Create Website dialog box appears.
If you are subscribed to Connectors only, the Create Website Group dialog box appears as follows:
If you are subscribed to both CloudWAF and Connectors, the Create Website Group dialog box appears as
follows:
Note: You may be required to configure one or more of the Website parameters after you have created the Website Group or added a Website. For more information, see Editing a Website.
Max requests per minute: The maximum number of requests to the site in a minute that is allowable before rate limiting is triggered.
Max requests per session: The maximum number of requests to the site in a single session that is allowable before rate limiting is triggered.
Max session length: The maximum length of a session that is allowable before rate limiting is triggered. Select the time units from the adjacent drop-down list.
• Dashboard: Displays graphs that show how various aspects of your traffic and your Advanced Bot Protection's
interventions are performing over time. For more information, see Analyzing Your Bot Protection Activity.
• Settings: The windows in Settings allow you to configure Advanced Bot Protection. The following windows are
available:
• Website Groups: Configure your Website Groups, Websites, Default Policy and per-Path Policy
Assignments. For more information, see Understanding the Website Groups Window.
• Policies: Add, rename, configure and delete your Policies. For more information, see Understanding
the Policies Window.
• Conditions: Add, configure and delete your Conditions and Condition Groups. For more information,
see Understanding the Conditions Window.
The elements of the Website Groups window are summarized below.
View Activity Graph: Displays a graph showing the traffic for that Website Group. For more information, see Analyzing Your Bot Protection Activity.
Rename Website Group: Give your Website Group a different name. For more information, see Renaming a Website Group.
Delete Website Group: Delete your Website Group. For more information, see Deleting a Website Group.
The Issues field for each Website indicates how many issues you may want to be aware of regarding your bot
protection.
In the Website Groups window, hover your mouse over the Issues field for any Website to see details regarding the
outstanding issues.
Website has never been published: You have not published the Website. If you have made changes to your configuration, you must publish them for them to take effect. For more information, see Updating a Configuration.
You have not configured your allowlist: You have no active Conditions in the allow Directive. This means that "good" bots may receive captchas or be blocked. For more information, see Understanding Directives and Conditions.
API paths have not been analyzed: It takes about four hours after your configuration has been published for the API paths to be analyzed. For more information, see Configuring per-Path Policies for Endpoints with API Calls. Note: This applies only to Website Groups that have Websites where the US data region has been selected.
You have not assigned per-Path Policies to discovered API paths: You have not configured per-Path Policies for paths that serve APIs. It is recommended that you do so to avoid technical issues. For more information, see Configuring per-Path Policies for Endpoints with API Calls.
Mitigation not activated: You have not activated any mitigation. That means that there are no active Conditions in the Website's block, captcha, tarpit, or delay Directives. For more information, see Configuring the Status of a Condition.
The Progress bar for each website indicates how much progress you have made in onboarding that Website for a
minimally acceptable level of bot protection.
In the Website Groups window, hover your mouse over the Progress bar for any website to see details regarding its
onboarding status.
Allowlist configured: You have active Conditions in the allow Directive. This means that "good" bots will not receive captchas or be blocked.
API paths analyzed and assigned: You have configured per-Path Policies for paths that serve APIs. Note: This applies only to Website Groups that have Websites where the US data region has been selected.
Mitigation activated: You have active Conditions in your Website's block, captcha, tarpit, or delay Directives.
The elements of the Policies window are summarized in the table below.
The elements of the Conditions window are summarized below.
Add New Condition button: Add a new Condition. For more information, see Adding a New Condition.
Type: Either Single Condition or Condition Group. For more information, see Managing Conditions.
View Activity Graph: Displays a graph showing the traffic for that Condition or Condition Group. For more information, see Analyzing Your Bot Protection Activity.
There are two general methods for analyzing your bot protection activity:
• Individual element graphs: You can view the protection activity graph of each element in your Advanced Bot
Protection deployment, those elements being Website Group, Website, Policy, and Condition.
• Dashboard: You can view the activity of one or more Websites in your account, over a configurable time period,
using a range of analytical graphs, in the Dashboard.
Website Groups, Websites, and Policies have activity graphs that show the mitigation activity (which Directives were triggered) for that element, over the last seven days, in terms of requests per second (RPS).
Conditions
A Condition has an activity graph that shows the number of requests for the following:
Website Groups, Websites, and Policies have activity graphs that show the mitigation activity (which Directives were triggered) for that element, over the last seven days, in terms of requests per second (RPS).
Note that if a mitigation was not activated, it does not appear on the graph at all.
You can toggle the line for any mitigation by clicking the mitigation's title at the bottom of the graph.
The graphs typically appear as shown in the images below. A mitigation whose graph shows a cyclic, sinusoidal shape might be the result of false positives, and further investigation is required. A mitigation with spikes indicates a bot attack that triggered that mitigation.
Website Graph
Policy Graph
1. In Advanced Bot Protection, verify that the Settings menu item is selected.
2. Verify that the Website Groups tab is selected.
3. For the Website Group whose graph you wish to see, click the Graph icon .
1. In Advanced Bot Protection, verify that the Settings menu item is selected.
2. Verify that the Website Groups tab is selected.
3. Select a Website Group. The Website Group Configuration window appears.
4. For the Website whose graph you wish to see, click the Graph icon .
1. In Advanced Bot Protection, verify that the Settings menu item is selected.
2. Select the Policies tab. The Policies window appears.
3. For the Policy whose graph you wish to see, click the Graph icon .
A Condition has an activity graph that shows the number of requests for the following:
You can toggle the line for any of the above results by clicking on its title at the bottom of the graph.
Note that if there was more than one triggered Condition in the Directive that was activated, they are all considered
Deciders.
Pay particular attention to the Captcha succeeded line. This is often a good indication of false positives. If you
activate a Condition in the Captcha Directive and a lot of captchas are being solved, you are seeing false positives.
1. In Advanced Bot Protection, verify that the Settings menu item is selected.
2. Select the Conditions tab. The Conditions window appears.
3. For the Condition whose graph you wish to see, click the Graph icon .
The dashboard provides you with a range of configurable displays that enable you to perform sophisticated analysis
of the traffic on your Websites and on the defenses you have put up using Advanced Bot Protection, enabling you to
discover attacks and refine your configuration to mitigate those attacks.
The Dashboard is divided into displays, each one showing a different aspect of the traffic or mitigation activity on your
estate.
Looker-powered displays
The following log analysis displays are entirely powered by the Looker business intelligence software. Some of the
proprietary displays have some Looker graphs embedded.
Some of the individual graphs in the other displays are also powered by Looker. For more information on how to use
Looker, refer to the Looker documentation.
Standard displays
At the top of each of the other displays there is a filter that allows you to select from a wide variety of ways to examine your data. For more information, see Using the Filters in the Standard Dashboard Displays.
Expanded view
You can access an expanded view of any dashboard window by clicking on the Expand view icon at the top right.
You can collapse the expanded view by clicking on the Collapse view icon at the top right.
You can view the dashboard to analyze the traffic on your Websites and the defenses you have put up to mitigate
attacks.
At the top of all the standard displays is a filter that enables you to determine precisely the data that your display is
based on.
You select the operator from a drop down list, for example:
For some filters, you select the values from a drop down list, for others you type in text, and for yet others you do both,
as in the Date example above.
For any parameter you can add another filter condition, logically linked to any previous ones by the Boolean OR
operator, by clicking the icon.
You can remove a filter condition by clicking the icon by that filter condition.
1. In Advanced Bot Protection, select the Dashboard menu item. The Dashboard display appears.
2. At the top right, select the display from the drop down list. By default, the display selected is Traffic Overview.
3. At the top left, click Filters. The filters appear.
Because of compliance requirements regarding the location of Personally Identifiable Information (PII) storage, you
must define for each of your Websites the region in the world in which you want that Website's data to be stored.
For more information on how to configure a Website's Data Region, see Understanding the Website Advanced
Settings and Editing a Website.
For the most part, the PII in the HTTP requests managed by Advanced Bot Protection is IP addresses.
Aggregated Data
Due to those same restrictions, you can view raw data, or analyses based on raw data, that comes from a single region
only.
So that you can view analyses of data from across your entire estate irrespective of regional origin, Imperva provides
two dashboard displays that are based on aggregated data: Traffic Overview and Explore Connector Aggregated
Logs. All of the other displays use regional raw data as their input source.
Custom Dashboards
You can create a Custom Dashboard that is comprised of graphs based on raw data. Each of these graphs can be based
on data from a different region, thus enabling you to analyze traffic based on raw data, from multiple regions, on a
single display.
1. In Advanced Bot Protection, select the Dashboard menu item. The Dashboards display appears.
2. In the Regions bar at the top, select any Region except Global or Custom Dashboards.
3. From the drop down list at the top right, select Explore Connector Access Log. A Looker display appears.
4. Create your query and set your display type using the Looker tools. For more information, refer to the Looker
documentation.
5. Click the Settings wheel at the top right.
6. From the drop down menu, select Save > To an existing dashboard. The Add to a Dashboard in this folder
dialog box appears.
7. Type a descriptive Title.
8. Click the account name.
9. Either select an existing dashboard or create a new dashboard. If the latter, in the Enter the new Dashboard name
field, type a name for the dashboard.
10. Click OK.
11. Select the new dashboard name and click Save to Dashboard.
The Traffic Overview display shows most of the information you will need to investigate bot attacks and the success of
your defenses against them. In cases where you need more data, it shows you where best to start looking.
The image below shows an example of the Traffic Patterns over Time display.
You can toggle any of the lines on or off by clicking on the title of the line beneath the display.
Pay attention to the relationship between Mitigated requests and Suspicious requests. Toward the left, at 08:00, there
is a spike in Suspicious requests that is not matched by a similar spike in Mitigated requests, indicating that there
might be an attack that your configuration is not set up to mitigate.
The image below shows an example of the Site Traffic by Requests over Time display.
This is a more general overview of the traffic to your site over the designated time period. As always, a sinusoidal
cyclic pattern indicates normal human traffic whereas a flat line and/or spikes indicate bot attacks.
Mitigation Actions
You can toggle any of the lines on or off by clicking on the title of the line beneath the display.
For example, if you see captcha mitigations that do not correspond with captcha cleared, that indicates that there is
an attack that is being caught, rather than false positives.
The pie chart to the right is based on the same information, but displays totals for the entire selected time interval,
rather than actions over time. This is for management-level analysis that is concerned with KPIs rather than specific
attacks, and serves as an indicator of the overall effectiveness of the bot protection.
The image below shows an example of the Managed Conditions over Time display.
You can toggle any of the lines on or off by clicking on the title of the line beneath the display.
This display is useful to see if a single Managed Condition has traffic patterns similar to human traffic (i.e. cyclic) which
would indicate false positives. Spikes and sharp troughs indicate bot traffic.
This is easiest to interpret if you are looking at a single site, as in the above example.
If you do find that there are false positives then you can reflect on the amplitude. If the amplitude is small then the
traffic is probably mainly bot traffic, and you may be willing to pay the price of a small number of captchas for real
users to mitigate that bot traffic.
The image below shows an example of the Custom Tags Over Time display.
You can toggle any of the lines on or off by clicking on the title of the line beneath the display.
This display is similar to Managed Conditions over Time, but is instead based on tags that you assign to different
Conditions. This enables you to track traffic based on your own breakdown.
If you hover your mouse over the top right corner, an ellipsis appears. Hover your mouse over it and click
Explore from here to drill down into the Looker output of this data. For more information about Looker, refer to the
Looker documentation.
Captcha Trend
You can toggle any of the lines on or off by clicking on the title of the line beneath the display.
This is a very useful display for catching false positives. It shows the relationship between captchas served and
captchas solved. (It also shows failed attempts.) A Captcha Attempt is the sum of successful and failed attempts. A
Captcha Request is counted each time a captcha is served. Hypothetically, if the lines of Captcha Requests and
Successful Attempts are equal, you have 100% false positives. In a real scenario, you may get a tiny percentage of false
positives, but even that may be too much to justify keeping active the mitigation(s) that trigger the captchas. You
need to weigh that against the damage that deactivating the mitigation would cause.
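The relationship between captchas served and captchas solved can be sketched as a small calculation. This is an illustrative fragment only, not part of the product; the function name and inputs are assumptions for the example.

```python
# Illustrative sketch: estimating a false-positive signal from Captcha
# Trend numbers. Assumes a Captcha Request is counted each time a
# captcha is served, and a Captcha Attempt is a solve try that either
# succeeds or fails.

def captcha_false_positive_signal(requests_served, successful, failed):
    """Return the fraction of served captchas that were solved.

    A high solve rate suggests real users are being challenged,
    i.e. false positives; bots rarely solve captchas.
    """
    if requests_served == 0:
        return 0.0
    attempts = successful + failed  # Captcha Attempts = successes + failures
    assert attempts <= requests_served, "cannot attempt more than were served"
    return successful / requests_served

# 1,000 captchas served, 950 solved: almost every challenged client was
# human, so the triggering Condition is producing false positives.
print(captcha_false_positive_signal(1000, 950, 30))  # 0.95
```

If this ratio is near zero, the captchas are mostly being served to bots and the mitigation is working as intended.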
The image below shows an example of the Machine Learning Threats over Time (Apollo Models) display.
You can toggle any of the lines on or off by clicking on the title of the line beneath the display.
Advanced Bot Protection provides machine learning algorithms that try to catch various types of suspicious
behavior. Unlike the graphs based on mitigations, tags, and so on, in which detection is based on a single request or a
small number of requests, the Machine Learning Threats over Time (Apollo Models) display is based on hourly
processing of huge jobs that analyze vast swathes of traffic at once, looking for patterns such as coordinated
behavior, where thousands of IPs act together to accomplish some kind of sinister goal. The algorithms then generate
large lists of IP addresses and distribute them, and traffic from IPs on those lists is tagged. This can catch bot attacks
that evade other means of detection. This is detection based not on "who you are" but rather on "what you are
doing": behavior-based rather than appearance-based detection.
Name Description

frequent_flyer: Tags active IPs which have persisted on a site, sending requests over a significant and anomalous
fraction of the last 24 hours. Targets behavior of IPs that are coming back over and over.

heavy_scraper: Tags IPs which heavily hit a single URL over the last hour. Targets behavior of abusive volume
generators.

high_volume_day: Tags active IPs which have generated a very significant volume of traffic during the last 24 hours.
Targets behavior of abusive volume generators.

id_ratio_zrt: Tags IPs which change identifiers, cookies, and tokens in a manner which is inconsistent with how real
single users and proxies/gateways change identifiers over time. Targets the programmatic use of a large number of
identifiers from a single IP.

id_ratio_zzr: Tags clusters of high volume IPs closely linked in request frequency, as well as shared distribution of
platform flags. Targets behavior of IP-distributed activity.

mesas: Tags active IPs with traffic patterns that are mostly piecewise flat during the last 24 hours, indicating plateaus
of consistent volumes of traffic. Targets behavior of programmatic request generation.

missing_gen_zid: Tags IPs which have persisted on a site without ever responding to a postback challenge and
generating a valid Identifier - ZID. Targets behavior of identification evasion.

missing_util_zid: Tags IPs which have persisted on a site, but have not been observed to utilize an Identifier - ZID
(though they may be generating ZIDs). Targets behavior of identifier abuse. The model can be used to identify
anomalous behavior from automation or infrastructure quirks.

no_gen_requests: Tags IPs which have persisted on a site without ever responding to a postback challenge. Targets
behavior of identification evasion.

uas_or_pid_churn: Tags IPs which change their user agent string and/or various other HTTP headers very often, but
do not exhibit normal or expected identifier, cookie or token changes. Targets behavior of identifier manipulation.

wide_scraper: Tags IPs which have requested a very large number of unique URLs during the last hour. Targets
behavior of site-indexing and crawler-like activity.
You can filter each display by Access Date and Site by using the filter drop-down fields on the top left.
If you click the ellipsis button on the right, you can access the following features:
Name Description

Enabling Protection - Condition Analysis: Tools for analyzing the traffic associated with a specific condition before it
is enabled.

Pages per session Exceeded: Tools for helping you pick an appropriate threshold for the Requests per Session Rate
Limiting parameter.

Session Length Exceeded: Tools for helping you pick an appropriate threshold for the Session Length Rate Limiting
parameter.

Aggregator User Agents: Information about clients triggering the Aggregator User Agents Condition.

Bad User Agents: Information about clients triggering the Bad User Agents Condition.

Investigation Dashboard: Tools for investigating a single client, for instance when you want to understand why
someone triggered the captcha or block Actions.

Explore Connector Aggregated Access Log: Direct access to the aggregated access log (compiled globally across all
regions).

Captcha Effectiveness: The number of bots vs. humans being served Captchas, and how many are solving or failing
them, by the rule or condition that triggered them.
The Usage Dashboard enables you to view the number of requests that are processed by Advanced Bot Protection.
Note: This dashboard applies only if you are billed per request. If you are billed per bandwidth,
please look at your CloudWAF usage report.
This dashboard does not display your entitled amount of requests. Check your subscription page
for details.
You can filter the displayed usage requests by access date and by Site (Website).
▪ Click the first drop down list to select an operator for a range of site names to include or exclude.
▪ Type in a name for the value.
3. Click Run.
You can export Advanced Bot Protection's dashboard data in near real time to a SIEM. This enables you to leverage
the analytical tools of your favorite SIEM.
When you add a Website Group, the Default Policy is applied to it.
The Default Policy is designed to provide a powerful, basic protection against the majority of bot attacks.
It can be edited and configured to suit your needs as the bot attacks evolve over time.
1. In Advanced Bot Protection, verify that the Settings menu item is selected.
2. Verify that the Website Groups tab is selected.
3. Select the Website Group whose Default Policy you wish to access. The Website Group Configuration window
appears.
A Policy is a set of rules that govern the way Advanced Bot Protection handles requests to the protected Path.
The Default Policy is automatically applied whenever a new Website Group is created.
The Default Policy is applied to the entire Website Group, but you can assign it, or any other Policies you create, to
specific Paths in your Websites, so that your bot protection strategies can be tailored to the precise nature of the
pages being protected. For more information, see Configuring per-Path Policies.
A Directive is a container of one or more Conditions. Each Directive is defined by an Action and contains the
Conditions that, when met by a monitored HTTP request, trigger that Action.
A Policy consists of a group of Directives that have a particular order. The Directives define the rules, and the order in
which they appear defines which rules are actually applied in practice.
A Condition is a container of rules, composed of Flags or code, against which the content of incoming requests are
checked. If a match is found, then that Condition has been triggered and its Directive may be acted on.
A request has many characteristics as a function of the array of data that it contains. This data may be part of the
request header or the Advanced Bot Protection token. It is possible that one characteristic will match a Condition in
one Directive, while another characteristic will match a Condition in another Directive. In that case, the Directive that
is actually applied is the one that is higher in the order within the Policy.
For example, a request from a "good" bot, say, a search engine, has typical bot characteristics that might ordinarily
get it blocked. However, if the "good bot" characteristic itself is matched by a Condition in the Allow Directive, and if
the Allow Directive is higher up than the Block directive, then the matched Condition in the higher-placed Directive is
the Decider and the request is allowed.
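The Decider logic described above can be sketched conceptually. The data structures and names below are illustrative assumptions for the example, not the product's API.

```python
# Conceptual sketch of Directive ordering. A Policy is an ordered list
# of Directives; each Directive has an Action and Conditions. Scanning
# top-down, the first Directive with a matching Condition is the
# Decider, and its Action is applied.

def decide(policy, request):
    """Return the Action of the highest-placed Directive whose
    Conditions match the request, or None if nothing matches."""
    for action, conditions in policy:          # list order defines precedence
        if any(cond(request) for cond in conditions):
            return action
    return None

# Hypothetical Conditions for the "good bot" example in the text.
is_good_bot = lambda r: r.get("verified_search_engine", False)
looks_like_bot = lambda r: r.get("bot_signature", False)

# Allow sits above Block, so a verified search engine is allowed even
# though it also matches the bot-signature Condition.
policy = [
    ("allow", [is_good_bot]),
    ("block", [looks_like_bot]),
]
crawler = {"verified_search_engine": True, "bot_signature": True}
print(decide(policy, crawler))  # allow
```

Swapping the two Directives in the list would cause the same request to be blocked, which is why Directive order matters.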
There is a set of out-of-the-box Conditions that you can use. Some of these are Managed Conditions which cannot be
changed. Other Conditions have parameters whose values you can edit. You can also create your own Custom
Conditions.
You can assign different Policies to different Paths or groups of Paths in your Website Groups or Websites, as explained
in the following sections.
A Path is a location or group of locations, within a Website, that is defined by a URL path or by a regular expression
that specifies characteristics of the page or pages.
Because of the varied nature of Paths and web pages in a website, there are some Paths for which you want to apply
one Policy, and there are other Paths for which you want to apply a different Policy, or indeed no Policy at all.
This is best understood with an example. Imagine a website that has some pages that have a large number of images
and nothing else. But elsewhere on the same website, there is a login page. And yet in another area of the website,
there are pages that contain the prices of the goods or services that your Website offers.
For the login page, you may want to apply a Policy that protects against brute-force type bot attacks.
For the prices pages, you may want to apply a Policy that protects against price-scraping bot attacks.
And for the pages that just show images, you may want to apply no Policy at all.
You can configure Advanced Bot Protection such that for any Path or set of Paths, you can assign a particular Policy.
Advanced Bot Protection checks for Path matching, starting at the top of the list of Paths. Therefore more specific
Paths should be higher up on the list, and the most general Path should be at the bottom.
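The top-down matching order can be sketched as follows. The Policy names and path list are hypothetical, made up for this example.

```python
# Illustrative sketch of top-down Path matching. More specific Paths
# are listed first; the first assignment whose prefix or regular
# expression matches the request path wins.

import re

path_assignments = [
    ("regex",  r".*\.(png|jpe?g|js|css)$", None),      # static assets: no Policy
    ("prefix", "/login",                   "Strict"),  # hypothetical Policy name
    ("prefix", "/",                        "Default"), # most general Path last
]

def policy_for(path):
    for kind, pattern, policy in path_assignments:
        matched = (re.fullmatch(pattern, path) if kind == "regex"
                   else path.startswith(pattern))
        if matched:
            return policy            # first match decides; stop scanning
    return None

print(policy_for("/img/logo.png"))   # None (no Policy for images)
print(policy_for("/login"))          # Strict
print(policy_for("/books/list"))     # Default
```

If the catch-all "/" prefix were listed first, it would shadow every more specific assignment, which is why the most general Path belongs at the bottom.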
You can define Paths as they appear in the hierarchical page structure on your Website, or you can use a regular
expression.
The default configuration already contains two Paths that illustrate all of the above concepts.
• There are two per-Path Policy assignments in the default configuration. The user has not added any per-Path
Policies.
• The first per-Path Policy assignment is a regular expression that defines images and pages that have other
extensions that do not really need bot protection, like javascript and css. Hence the word "matches."
• For the first per-Path Policy assignment, no Policies have been assigned.
• The second per-Path Policy assignment defines all the paths in the website.
• The second per-Path Policy assignment has the Default Policy assigned to it.
• Since a request to an image file would be matched by the higher of the two per-Path Policy assignments, which
has No Policy assigned to it, no Policy would be applied to a request to an image file.
• Requests to other pages in the website are dealt with by the Default Policy.
• Each path has a Rate Limiting value, which defines how requests to that path count against Rate Limiting. For
more information, see Understanding per-Path Policies and Rate Limiting.
The concept of per-Path Policies and Rate Limiting is best understood with an example.
Imagine you have a website with a selection of books for sale: acmebooks.com. Initially your Website just has some
pages of a general nature about your books, your company, and some pages with explanations as to how to order via
email.
You decide to use Advanced Bot Protection to mitigate bot attacks. You are concerned about bots that send requests
repeatedly to your site's pages, so you use a Policy that has a "lenient" Rate limiting Condition - one that is set up to
serve a captcha if the request rate exceeds 12 requests per minute. Such a setup can be illustrated conceptually with
the diagram below:
It is important to understand how this works. Every time there is a request to a page in the Path starting with / (i.e. any
page in the site), the number of requests at the counter is incremented by 1, and the Policy checks to see if the RPM at
the counter has exceeded its limit of 12. If it has, it instructs a captcha to be served. There is one single counter that
is counting all the requests to every page in the site, and it is that total that the Policy reads.
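The single shared counter can be sketched as a sliding window over request timestamps. This is a conceptual illustration only; the class name and mechanics are assumptions, not the product's implementation.

```python
# Conceptual sketch of the one-and-only per-site rate-limit counter.
# Every request to any page increments the same counter; the Policy
# serves a captcha once the requests-per-minute total exceeds the limit.

from collections import deque

class SiteRateLimiter:
    def __init__(self, limit_rpm=12):
        self.limit = limit_rpm
        self.hits = deque()                  # timestamps of recent requests

    def handle(self, now):
        self.hits.append(now)
        while now - self.hits[0] >= 60.0:    # keep only the last minute
            self.hits.popleft()
        return "captcha" if len(self.hits) > self.limit else "allow"

limiter = SiteRateLimiter(limit_rpm=12)
results = [limiter.handle(float(i)) for i in range(13)]  # 13 requests in 13 s
print(results[-1])  # captcha: the 13th request exceeds the 12 RPM limit
```

Because all pages feed the same counter, it does not matter which pages the 13 requests went to; the total alone triggers the captcha.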
Now imagine that you add a search page to your Website. Your search page uses a machine-readable text field to offer
suggestions each time the user types in a character. You decide that the search page has no value for a bot attack and
so you decide not to assign any Policy to the search page's Path. So that Path has no Policy assigned. This is illustrated
in the diagram below.
But now there is a problem. Since a machine-readable text field generates a request each time the user types a
character, the expected rate from such a page in use can be 50 RPM or more. The problem is that all these requests
are being counted at the one-and-only counter used by the Website. So if a legitimate user types in a search and then
immediately accesses one of the general pages, the rate limit Policy on that general page will refer to the counter,
find that its limit of 12 RPM has been far exceeded, and serve a captcha, even though the usage was entirely
legitimate and expected!
You solve this problem by configuring the Rate Limiting value in the per-Path Policy Assignment window (not to be
confused with the Rate limiting Condition inside a Policy). You configure this value to None, in the Assign Policy
window. (For more information, see Creating a New per-Path Policy Assignment.)
The effect of this is to discard all request counts from requests to that Path and is illustrated in the diagram below.
So now, the high number of requests to the machine-readable text field pages are not counted at all, and the rate limit
policy on the general pages works as it should.
This appears in the Advanced Bot Protection UI as in the image below. This is precisely how the Default Policy is
constructed.
Now imagine that you add a login page to your website. Your login page is sensitive to bot attacks that try to steal
account credentials and is thus a high-value target. (For more information, see Advanced Bot Protection Use Cases
and Best Practices.) One of the bot attack mitigations you want to use is also based on rate limiting, but you want a
more stringent RPM value for this more sensitive page, say 5 RPM.
But now you have a similar problem to the one you had earlier. Requests to both Paths are being counted by the one
and only counter that the site uses. This means that, just like before, if there is legitimate use of the general pages up
to the 12 RPM limit on that path but higher than the 5 RPM for the login page Path, any requests to the login will
exceed the login page Path's rate limit and a captcha will be served.
You solve this problem by assigning a custom scope Rate Limit to that Path. You do this by configuring the Rate
Limiting value in the per-Path Policy Assignment window to Rate limiting by custom scope, and giving that
custom scope a name (in this example: login).
By assigning a custom scope Rate Limit to a path, you are in effect creating a separate request counter for that Path.
This is illustrated in the diagram below.
Now, requests to the general pages are counted by the overall site counter at the bottom, but requests to the login
page are counted by the counter defined by the custom scope login. A request is counted by one counter or the other,
never both. So the stringent rate limit policy refers to the counter defined by the custom scope login and that counter
is not incremented by requests to the general pages. So no undeserved captchas will be served.
The result appears in the Advanced Bot Protection UI as in the image below:
In summary, there are three Rate Limiting options for any Path:
• Per Website: This is the default option. Requests to all Paths in this Website Group are totaled. A Rate Limiting
Condition will use that total, even if that Condition is in a Policy that is assigned to a different Path.
• Rate limit per custom scope: By selecting this option and entering a text string in the field, you define a Custom
Scope for this Path. Requests to this Path (and other Paths with the same Custom Scope) are totaled separately
from requests elsewhere. So you can make sure that requests to a Path where high request rates are legitimate
(like pages with images) will not activate a block or captcha on requests to a Path where high request rates are
suspicious (like a login page).
Enter a text string to define the Custom Scope for this Path.
• No Rate Limiting: Requests to this path are not counted against the rate limit anywhere in the site - including
this path. The request counts are simply discarded.
Note: It should be clear from the above explanation that Rate Limiting and Custom Scope, as
applied to a Path, are a completely separate entity from the Policy assigned to that path. Rate
Limiting and Custom Scope refer to how requests to that Path are counted: they are either
accumulated together with requests to other Paths (default), or accumulated only with requests for
this Path (and any others assigned the same Custom Scope), or not accumulated at all (no Rate
Limiting).
A Policy assigned to a Path defines the actions that will get triggered by requests made to that Path
when the Policy's Conditions are met.
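The three counting options summarized above can be sketched together. The path list and scope names are illustrative assumptions for the example.

```python
# Illustrative sketch of the three per-Path Rate Limiting options: a
# request is counted by the shared per-website counter, by a named
# custom-scope counter, or not at all. One counter or the other,
# never both.

from collections import Counter

# path prefix -> counting scope (most specific prefixes first)
rate_scopes = {
    "/search": None,        # No Rate Limiting: request counts are discarded
    "/login":  "login",     # Rate limit per custom scope "login"
    "/":       "website",   # Per Website (default shared counter)
}

counters = Counter()

def count_request(path):
    for prefix, scope in rate_scopes.items():
        if path.startswith(prefix):
            if scope is not None:
                counters[scope] += 1
            return                       # first matching Path decides

for p in ["/search", "/search", "/login", "/books", "/about"]:
    count_request(p)
print(dict(counters))  # {'login': 1, 'website': 2}; /search requests discarded
```

A Rate Limiting Condition for the login page would read only the "login" counter, so heavy legitimate traffic to the general pages can never trigger it.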
When you assign Advanced Bot Protection to protect a new Website, Advanced Bot Protection identifies and displays
those paths in your Website that have pages that have API calls in them.
The following Directives do not work with pages that have API calls in them: captcha and identify.
It is strongly recommended that you configure a per-Path Policy for any path that is shown as having an API call, and
that those per-Path Policies' captcha and identify Directives be empty of Conditions.
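The recommendation above can be illustrated with a sketch of a Policy for an API Path. The structure and Condition helpers here are hypothetical, made up for the example; they are not the product's configuration format.

```python
# Conceptual sketch of a Policy suitable for an API Path. API clients
# cannot render a captcha page or run the identification script, so the
# captcha and identify Directives are left without Conditions; other
# Directives, such as block, still apply to API traffic.

bad_ua = lambda request: "curl" in request.get("ua", "")  # hypothetical Condition

api_policy = {
    "captcha":  [],        # empty: never serve a captcha to an API client
    "identify": [],        # empty: never send the identify challenge
    "block":    [bad_ua],  # blocking still works for API traffic
}

def triggered_directives(policy, request):
    """Return the names of Directives whose Conditions match the request."""
    return [name for name, conds in policy.items()
            if any(c(request) for c in conds)]

print(triggered_directives(api_policy, {"ua": "curl/8.0"}))  # ['block']
```

With the captcha and identify Directives empty, no request to the API Path can ever trigger those Actions, which is exactly the behavior the recommendation calls for.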
To see the paths in your website that have API calls in them:
1. In Advanced Bot Protection, verify that the Settings menu item is selected.
2. Verify that the Website Groups tab is selected.
3. Click on a Website Group. The Website Group Configuration window appears.
4. Click Edit per-Path Policies. The per-Path Policy Assignments window appears.
5. If Advanced Bot Protection detects that the Website Group has API endpoints, the API paths action
recommendation box appears. Click on it. The list of paths that contain API calls appears.
6. Click the relevant Assign new per-Path Policy link for a Path. The Assign per-Path Policy dialog box
appears.
7. Make your selections and/or enter values according to the table below. Note that you should assign Policies
whose captcha and identify Directives are empty of Conditions, or whose Conditions are disabled, so that those
Directives are never activated.
8. Click the Assign this Policy button.
9. Repeat steps 6 - 8 until you have assigned appropriate per-Path Policies to all the Paths that have API calls.
Name Description

Type
• Path Prefix Match: Allows you to set the prefix that defines the Paths to which you will assign a Policy.
• Path Regex Match: Allows you to enter a regular expression that defines the Paths to which you will assign a
Policy.
• Javascript Challenge: Assigns the Policy to all requests that are from Javascript.
• iOS Challenge: Assigns the Policy to all requests that are from Apple iOS machines. This covers bot threats
that are unique to the Apple iOS.
• Android Challenge: Assigns the Policy to all requests that are from Android machines. This covers bot threats
that are unique to the Android operating system.

Path Prefix: Type the Path prefix that defines the Path(s) to which the Policy is assigned. Appears if you selected the
Path Prefix Match Type.

Path Regex: Type the regular expression that defines the Path(s) to which the Policy is assigned. Appears if you
selected the Path Regex Match Type.

Policy: Select the Policy you wish to apply to the defined Path(s).

Rate Limiting
• Per Website: This is the default option. Requests to any path in this Website Group are totalled. A Rate
Limiting Condition will use that total, even if that Condition is in a Policy that is assigned to a different Path.
• Rate Limit per Custom Scope: Setting up a Custom Scope for a particular Path means that requests to this
Path (and other Paths with the same Custom Scope) are totalled separately from requests elsewhere. This
means you can make sure that requests to a Path where high request rates are legitimate (like pages with
images) will not activate a block or captcha on requests to a Path where high request rates are suspicious
(like a login page).
• No Rate Limiting: Requests to this Path do not count at all against Rate Limiting Conditions in any Policies
that apply anywhere in the Website. The request counts are simply discarded.

Rate Limiting Values: Choose between:
• Use the default website group values: For this particular per-Path Policy, use the default Website Group
values. For more information, see Editing a Website Group - Default Rate Limiting Values.
• Custom values: Type your own values for each of the three rate limiting parameters.
For more information on Paths, see Path and Understanding per-Path Policies.
1. In Advanced Bot Protection, verify that the Settings menu item is selected.
2. Verify that the Website Groups tab is selected.
3. Click on a Website Group. The Website Group Configuration window appears.
4. Click Edit per-Path Policies. The per-Path Policy Assignments window appears.
5. Hover your mouse over the assignment you wish to edit and click the Edit per-Path Policy Assignment icon
on the right. The Assign Policy window appears.
6. Make your selections and/or enter values according to the table below.
7. Click the Assign this Policy button.
Name Description

Type
• Path Prefix Match: Allows you to set the prefix that defines the Paths to which you will assign a Policy.
• Path Regex Match: Allows you to enter a regular expression that defines the Paths to which you will assign a
Policy.
• Javascript Challenge: Assigns the Policy to all requests that are from Javascript.
• iOS Challenge: Assigns the Policy to all requests that are from Apple iOS machines. This covers bot threats
that are unique to the Apple iOS.
• Android Challenge: Assigns the Policy to all requests that are from Android machines. This covers bot threats
that are unique to the Android operating system.

Path Prefix: Type the Path prefix that defines the Path(s) to which the Policy is assigned. Appears if you selected the
Path Prefix Match Type.

Path Regex: Type the regular expression that defines the Path(s) to which the Policy is assigned. Appears if you
selected the Path Regex Match Type.

Policy: Select the Policy you wish to apply to the defined Path(s).

Rate Limiting
• Per Website: This is the default option. Requests to any path in this Website Group are totalled. A Rate
Limiting Condition will use that total, even if that Condition is in a Policy that is assigned to a different Path.
• Rate Limit per Custom Scope: Setting up a Custom Scope for a particular Path means that requests to this
Path (and other Paths with the same Custom Scope) are totalled separately from requests elsewhere. So you
can make sure that requests to a Path where high request rates are legitimate (like pages with images) will
not activate a block or captcha on requests to a Path where high request rates are suspicious (like a login
page).
• No Rate Limiting: Requests to this Path do not count at all against Rate Limiting Conditions in any Policies
that apply anywhere in the Website. The request counts are simply discarded.

Rate Limiting Values: Choose between:
• Use the default website group values: For this particular per-Path Policy, use the default Website Group
values. For more information, see Editing a Website Group - Default Rate Limiting Values.
• Custom values: Type your own values for each of the three rate limiting parameters.
You may want to delete a per-Path Policy assignment that is no longer used.
1. In Advanced Bot Protection, verify that the Settings menu item is selected.
2. Verify that the Website Groups tab is selected.
3. Click on a Website Group. The Website Group Configuration window appears.
4. Click Edit per-path Policies. The per-Path Policy Assignments window appears.
5. Hover your mouse over the assignment you wish to delete and click the Delete button. The confirmation dialog
box appears.
6. Click OK.
You can create a new per-Path Policy assignment at any time, as part of your response to evolving bot attacks.
1. In Advanced Bot Protection, verify that the Settings menu item is selected.
2. Verify that the Website Groups tab is selected.
3. Click on a Website Group. The Website Group Configuration window appears.
4. Click Edit per-path Policies. The per-Path Policy Assignments window appears.
5. Click the Assign new Per-Path Policy button. The Assign Per-path Policy window appears at the top.
6. Make your selections and/or enter values according to the table below.
7. Click the Assign this Policy button.
Name Description
Type Choose between:
• Path Prefix Match: Allows you to set the prefix that defines the Paths to which you will assign a Policy.
• Path Regex Match: Allows you to enter a regular expression that defines the Paths to which you will assign a Policy.
• Javascript Challenge: Assigns the Policy to all requests that come from Javascript.
• iOS Challenge: Assigns the Policy to all requests that come from Apple iOS devices. This covers bot threats that are unique to Apple iOS.
• Android Challenge: Assigns the Policy to all requests that come from Android devices. This covers bot threats that are unique to the Android operating system.
Path Prefix Type the Path prefix that defines the Path(s) to which the Policy is assigned. Appears if you selected the Path Prefix Match Type.
Path Regex Type the regular expression that defines the Path(s) to which the Policy is assigned. Appears if you selected the Path Regex Match Type.
Policy Select the Policy you wish to apply to the defined Path(s).
Rate Limiting Choose between:
• Per Website: This is the default option. Requests to any path in this Website Group are totalled. A Rate Limiting Condition will use that total, even if that Condition is in a Policy that is assigned to a different Path.
• Rate Limit per Custom Scope: Setting up a Custom Scope for a particular Path means that requests to this Path (and other Paths with the same Custom Scope) are totalled separately from requests elsewhere. This ensures that requests to a Path where high request rates are legitimate (such as pages with images) do not activate a block or captcha on requests to a Path where high request rates are suspicious (such as a login page).
• No Rate Limiting: Requests to this Path do not count at all against Rate Limiting Conditions in any Policies that apply anywhere in the Website. The request counts are simply discarded.
Rate Limiting Values Choose between:
• Use the default website group values: For this particular per-Path Policy, use the default Website Group values. For more information, see Editing a Website Group - Default Rate Limiting Values.
• Custom values: Type your own values for each of the three rate limiting parameters.
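As a sketch of what the scope choice changes, each option only determines which counter a request increments (the function and key names here are hypothetical, for illustration only, not the product's implementation):

```python
from collections import Counter

counts = Counter()

def count_request(website_group, path_scope=None, no_rate_limiting=False):
    """Increment the rate-limiting counter that a request belongs to.

    path_scope=None        -> Per Website: one shared counter for the whole group.
    path_scope="images"    -> Custom Scope: counted separately from other paths.
    no_rate_limiting=True  -> the request is not counted at all.
    """
    if no_rate_limiting:
        return  # request counts are simply discarded
    key = (website_group, path_scope)  # None groups all per-website paths together
    counts[key] += 1

count_request("shop", None)       # counted with all other per-website traffic
count_request("shop", "images")   # counted separately, so image-heavy pages
count_request("shop", "images")   # cannot trip limits on e.g. the login page
count_request("shop", None, no_rate_limiting=True)  # not counted
```

The point of the separate key is exactly the scenario described above: a burst of image requests fills only the `("shop", "images")` counter and cannot trigger a block or captcha on the login page's counter.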
Managing Policies
When you create a new Policy, you must decide if it will be based on Standard Directives, or Custom Directives.
• Standard Directives: A Policy with Standard Directives has the six Directives provided by Imperva, in the
recommended order.
• Custom Directives: When you create a Policy with Custom Directives, you can reorder and/or delete the
Directives, or add new Directives with names of your choosing. For more information, see Adding and
Reordering Directives.
Any newly created Policy has no Conditions in any of its Directives. You must add these yourself. For more
information, see Managing Policy Directives and their Conditions.
1. In Advanced Bot Protection, verify that the Settings menu item is selected.
2. Select the Policies tab. The Policies window appears.
3. Click the Create New Policy button. The Create Policy window appears.
If you select Custom Directives, the Directives appear in the dialog box. You can reorder the Directives and add
Directives.
Cloning a Policy
You can create a new Policy by cloning an existing one.
This is particularly useful when you want to start testing variations of the Default Policy.
To clone a policy:
1. In Advanced Bot Protection, verify that the Settings menu item is selected.
2. Select the Policies tab. The Policies window appears.
3. Click the Clone Policy button for the Policy you wish to clone. The Clone Policy dialog box appears.
Renaming a Policy
To rename a Policy:
1. In Advanced Bot Protection, verify that the Settings menu item is selected.
2. Select the Policies tab. The Policies window appears.
3. Click the Rename Policy button for the Policy you wish to rename. The Policy Name becomes editable.
4. Type the new Name.
Deleting a Policy
To delete a policy:
1. In Advanced Bot Protection, verify that the Settings menu item is selected.
2. Select the Policies tab. The Policies window appears.
3. Click the Delete Policy button for the Policy you wish to delete.
4. Click OK in the confirmation dialog box.
The following sections describe the various ways you can manage Directives and Conditions to get the best results
from Advanced Bot Protection.
Directives are named for the Actions they perform, and contain one or more Conditions.
Advanced Bot Protection provides six out-of-the-box Directives. They are summarized in the Directives table below.
A Condition is a container of rules, composed of Flags or code, against which the content of incoming requests is checked. If a match is found, then that Condition has been triggered and its Directive may be acted on.
If data in a request matches one of the Conditions in a Directive, then the Directive is activated and its Action taken.
However, the order of the Directives in a Policy is critical. If data in a request matches a Condition in one Directive, and
other data in the same request matches a different Condition in another Directive, then it is the higher Directive that is
activated and any matches to Conditions in lower Directives are ignored - they may be logged, but their Actions are
not carried out. For more information, see Understanding the Structure of the Policies and the Default Policy.
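This first-match-wins ordering can be sketched as follows (hypothetical data structures; the actual evaluation engine is Imperva's internal code):

```python
def evaluate_policy(directives, request):
    """Return the Action of the highest Directive whose Conditions
    match the request; matches in lower Directives are ignored."""
    for directive in directives:  # ordered from highest to lowest
        for condition in directive["conditions"]:
            if condition(request):
                return directive["action"]  # first match wins
    return "allow"  # no Condition matched anywhere

# Example: an allow Directive for search engines sits above a block Directive.
policy = [
    {"action": "allow", "conditions": [lambda r: "Googlebot" in r["ua"]]},
    {"action": "block", "conditions": [lambda r: "curl" in r["ua"]]},
]
evaluate_policy(policy, {"ua": "curl/8.0"})       # -> "block"
evaluate_policy(policy, {"ua": "Googlebot/2.1"})  # -> "allow"
```

Because evaluation stops at the first Directive with a triggered Condition, moving a Condition up or down the Directive list changes which Action wins, which is why the order is critical.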
Edit a Condition Edit the parameters and/or the Flags of certain types of Conditions. For more information, see Understanding and Editing Conditions.
Directives
When the data in a request matches a Condition in a Directive, that Directive's Action is activated.
• Existing Condition: This is either a Managed Condition - an Advanced Bot Protection out-of-the-box Condition
that you cannot edit - or any other Condition that you are already using in your account.
• New Condition: A condition that you need to create, based on a given template, or a custom Condition.
A summary of the Existing Conditions is presented in the table below. For a summary of the New Condition templates,
see Creating a New Condition.
1. Access the Policy in which you wish to insert a Condition into a Directive, in either of the following ways:
▪ Access the Default Policy. For more information, see Accessing the Default Policy.
or
▪ In Advanced Bot Protection, verify that the Settings menu item is selected. Select the Policies tab, and then select a Policy. The Policy Details window opens for that Policy.
2. Click the Insert Condition button by the Directive to which you wish to add the Condition. The Insert Condition dialog box appears.
3. Click Create by the Condition template you wish to use to create your Condition. The Create Condition dialog box opens.
The Create Condition dialog box is different for each new Condition template.
4. Type in the data for your particular Condition. For more information, see Creating a New Condition.
5. Click Save.
Existing Conditions
Search engines The request comes from a search engine crawler. Allow
Known violator data centers Malicious data center IPs seen across Imperva's entire network. Block or captcha
Bad user agents Standard checks for invalid user agents. Block or captcha
Browser environment anomalies Standard checks on the validity of the postback, where a failure is usually indicative of tampering. Block or captcha
Aggregator user agents User agents of known crawlers. Block or captcha
A Condition's Status can be one of the following. You can toggle between them at any time:
• Active: A request with data that matched the Condition causes that Directive to be activated.
• Passive: A request with data that matched the Condition does not cause that Directive to be activated, but the
match is logged.
• Disabled: A request with data that matched the Condition does not cause that Directive to be activated, and the
match is not logged.
To configure a Condition's status:
1. Access the Policy in either of the following ways:
▪ Access the Default Policy. For more information, see Accessing the Default Policy.
or
▪ In Advanced Bot Protection, verify that the Settings menu item is selected. Select the Policies tab, and then select a Policy. The Policy Details window opens for that Policy.
2. Click on the Condition whose status you wish to configure. The Condition's Flags and functionality buttons appear.
You can assign Tags to a Condition. Tags are used for monitoring as there are graphs that show which Conditions are
activated based on their tags. This allows the grouping of Conditions in meaningful ways for monitoring.
To edit a Condition's Tags:
1. Access the Policy in either of the following ways:
▪ Access the Default Policy. For more information, see Accessing the Default Policy.
or
▪ In Advanced Bot Protection, verify that the Settings menu item is selected. Select the Policies tab, and then select a Policy. The Policy Details window opens for that Policy.
2. Click on the Condition whose Tags you wish to edit. The Condition's Flags and functionality buttons appear.
3. To add a Tag, type in the name of the Tag. You can add more than one Tag, separating them by commas.
4. Click Save.
There are three types of Conditions and these types affect your editing capabilities for those Conditions.
• Managed Conditions: Prepackaged Advanced Bot Protection Conditions that you cannot edit directly. The code
is managed by Imperva to ensure optimal efficacy as new bot threats emerge.
• Condition Templates: A set of prepackaged Advanced Bot Protection Condition templates, each of which has parameters that you can edit. However, like the Managed Conditions, you cannot edit their code directly; instead, you must create a Custom Condition, copy over the coded Flags, and modify the Flags there. In effect, you create a new Condition based on the template.
• Custom Conditions: The one Condition type that does allow you to edit the Flags directly. You can use a Custom Condition to create a Condition from scratch, or to create a new Condition based on an existing Condition, by copying over the existing Condition's code and modifying it.
When you create a Custom Condition from scratch you can use Flags and/or code.
• Flags: A Flag is a prepackaged test or rule that looks for a specific data item in the incoming request. The majority of Conditions, and indeed all of the Conditions in the Default Policy, are made up of Flags only. To create your own Condition that uses one Flag only, use the Flags Condition Template. To create your own Condition that uses more than one Flag, use the Custom Conditions Template, where you can also use code as explained below. When you select the Flags Condition Template, you get access to full documentation for all the Flags.
• Code: Advanced Bot Protection offers you the use of a proprietary language called Moi to create your own Conditions. You must use the Custom Conditions Template to create a Condition that is based on Moi code. (It can also contain Flags and/or Properties.)
• Properties: Moi uses a set of Properties whose values can be matched by an incoming request. To create your own Condition that uses one Property only, use the Property Field Condition Template. To create your own Condition that uses more than one Property, use the Custom Conditions Template, where you can also use code as explained above. When you select the Property Field Condition Template, you get access to full documentation for all the Properties.
You can edit the Tags for any type of Condition. For more information, see Editing a Condition's Tags.
To edit a Condition's parameters:
1. Access the Policy in either of the following ways:
▪ Access the Default Policy. For more information, see Accessing the Default Policy.
or
▪ In Advanced Bot Protection, verify that the Settings menu item is selected. Select the Policies tab, and then select a Policy. The Policy Details window opens for that Policy.
2. Click on the Condition whose parameters you wish to edit. The Condition's Flags and functionality buttons appear.
3. Enter values for the parameters. The exact parameters are different for each Condition. For more information, see the table in Adding a New Condition.
4. Click Save.
1. Access the Policy in either of the following ways:
▪ Access the Default Policy. For more information, see Accessing the Default Policy.
or
▪ In Advanced Bot Protection, verify that the Settings menu item is selected. Select the Policies tab, and then select a Policy. The Policy Details window opens for that Policy.
2. Enter values for the Name and Description parameters. In the Code text field, enter the Flags you wish to make up your Condition.
3. Click Save.
Note: You can also edit a non-Managed or Custom Condition from the Conditions tab. For more
information, see Managing Conditions.
You can move a Condition from one Directive to another. This is useful, for example, when you have checked a
Condition for false positives in the captcha Directive, and now you want to move it to the block Directive.
To move a Condition:
1. Access the Policy in either of the following ways:
▪ Access the Default Policy. For more information, see Accessing the Default Policy.
or
▪ In Advanced Bot Protection, verify that the Settings menu item is selected. Select the Policies tab, and then select a Policy. The Policy Details window appears for that Policy.
2. Click on the Condition you wish to move. The Condition's Flags and functionality buttons appear.
3. Select the Target Directive (to where you want to move the Condition).
4. Click Move.
You can add Directives to a Policy and change the order of the Directives in a Policy.
You can perform these actions only for Policies that you yourself have created. You cannot add and/or reorder
Directives in the Default Policy.
To add/reorder Directives:
1. In Advanced Bot Protection, verify that the Settings menu item is selected.
2. Select the Policies tab. The Policies window appears.
3. Select a Policy. The Policy Details window appears for that Policy.
4. To reorder Directives:
▪ Click on a Directive and drag-and-drop it to the desired place in the list.
5. To add a Directive:
Managing Conditions
You can configure Conditions individually. Or you can configure Conditions together, as a group. For this latter task,
create a Condition Group of the Conditions you wish to configure together.
The Conditions tab enables you to view all the Conditions in your account, edit them, delete them, analyze the
matches they generate, and create new ones. It also shows which Policies are using each Condition.
When you add a new Condition in the Conditions tab, you are not immediately placing it in a Directive. You are simply
creating a Condition for future use.
1. In Advanced Bot Protection, verify that the Settings menu item is selected.
2. Select the Conditions tab. The Conditions window appears.
3. Click the Add New Condition button. The Add New Condition window appears.
4. Click Create beside the template of the Condition you wish to add. The Create Condition dialog box opens.
The Create Condition dialog box is different for each new Condition template.
5. Type in the data for your particular Condition. For more information, see the table below.
6. Click Save.
New Conditions
IP Set
• IP Addresses: A list of IPv4, IPv6 or CIDR patterns that can be mixed freely.
Header
• Pattern: The regular expression that should match the value of the specified header.
Tag
• Tag: The name of the tag that should be present in the token.
Flag
• Flag: Select one of the Flags that is in your account. You can only select one Flag. You can access full documentation of the Flags from this option.
Compound Rate Limiting
• Requests per Minute: Maximum number of requests per minute allowed.
Identify eventually
• Requests without Token: Maximum number of requests allowed without a token.
• Requests with Expired Token: Maximum number of requests allowed with an expired token.
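To illustrate how a "Requests per Minute" style threshold behaves, here is a rough sliding-window sketch (the class name and the window logic are assumptions for illustration, not the product's implementation):

```python
import time
from collections import deque

class PerMinuteLimit:
    """Sliding one-minute window: allow() returns True while the
    request count stays at or below the configured maximum."""
    def __init__(self, max_per_minute):
        self.max_per_minute = max_per_minute
        self.timestamps = deque()

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        # Drop requests that have fallen out of the 60-second window.
        while self.timestamps and now - self.timestamps[0] >= 60:
            self.timestamps.popleft()
        self.timestamps.append(now)
        return len(self.timestamps) <= self.max_per_minute

limit = PerMinuteLimit(max_per_minute=2)
limit.allow(now=0.0)   # True  (1st request)
limit.allow(now=1.0)   # True  (2nd request)
limit.allow(now=2.0)   # False (3rd request within the minute)
limit.allow(now=65.0)  # True  (the earlier requests have expired)
```

The "Requests without Token" and "Requests with Expired Token" parameters behave analogously, except that each counts only the subset of requests in that state.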
A Condition Group is a group of Conditions under a single name. By using a Condition Group, you can manipulate a
large number of Conditions that you normally use together, all at once: adding them to Directives, moving them from
Directive to Directive, configuring their status, etc.
1. In Advanced Bot Protection, verify that the Settings menu item is selected.
2. Select the Conditions tab. The Conditions window appears.
3. Click the Add New Condition Group button. The Create Condition Group window appears.
4. By the Condition you wish to add, click Insert. The Condition appears in the Condition Group, together with the Condition's Flags and functionality buttons.
This enables you to configure the Condition. Changes you make to the Condition apply to it within the Condition Group only.
5. Repeat the above step for each Condition you wish to add.
6. Click Save.
1. In Advanced Bot Protection, verify that the Settings menu item is selected.
2. Select the Conditions tab. The Conditions window appears.
3. Click the Delete button by the Condition or Condition Group you wish to delete. The confirmation dialog box
appears.
4. Click OK.
You can also add a new Website to a Website Group, and rename or delete any Website Group.
Note: If you are using a Connector instead of CloudWAF, use the procedure in Adding a Website to a
Website Group - Using a Connector.
When you add a Website, you can also define a Cookie Scope for that Website and related Websites.
When you add a Website, a cookie is created for that Website's Path, for example, www.example.com. However, by default it is only visible there. You can use the Cookie Scope to expand the coverage of a cookie set up for one Website. Set a path in Cookie Scope to define other paths that can use the same cookie. Note that this can only go as far as the apex domain and all its subdomains.
For example, if you want the cookie to be visible for aaa.example.com and bbb.example.com, type example.com in the Cookie Scope.
Note: If you do not set the cookie scope, the domain for the cookie will be empty. Due to
inconsistent browser handling of cookies with no domain or an empty domain, it is strongly
recommended that you do not leave the cookie scope empty.
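To illustrate the apex-domain rule above, here is a sketch of the visibility a Cookie Scope of example.com would give (an illustrative helper, not Imperva code; it mirrors standard browser Domain-attribute matching):

```python
def cookie_visible(request_host, cookie_scope):
    """True if a cookie whose Domain attribute is `cookie_scope`
    would be sent on a request to `request_host`."""
    request_host = request_host.lower().rstrip(".")
    cookie_scope = cookie_scope.lower().lstrip(".")
    # The host matches if it equals the scope or is a subdomain of it.
    return request_host == cookie_scope or request_host.endswith("." + cookie_scope)

# With a Cookie Scope of example.com:
cookie_visible("aaa.example.com", "example.com")  # True
cookie_visible("bbb.example.com", "example.com")  # True
cookie_visible("example.org", "example.com")      # False: different apex domain
```

Note the dot in the suffix check: it prevents an unrelated domain such as badexample.com from matching example.com.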
1. In CloudWAF, add the website that you wish to protect. For more information, see Onboarding a Site – Web
Protection and CDN.
2. In Advanced Bot Protection, verify that the Settings menu item is selected.
3. Verify that the Website Groups tab is selected.
4. Select the Website Group to which you wish to add the Website. The Website Group Configuration window appears.
If you are subscribed to CloudWAF only, the Add Website dialog box appears as follows:
If you are subscribed to both CloudWAF and Connectors, the Add Website dialog box appears as follows:
When you add a Website, you can also define a Cookie Scope for that Website and related Websites.
When you add a Website, a cookie is created for that Website's Path, for example, www.example.com. However, by default it is only visible there. You can use the Cookie Scope to expand the coverage of a cookie set up for one Website. Set a path in Cookie Scope to define other paths that can use the same cookie. Note that this can only go as far as the apex domain and all its subdomains.
For example, if you want the cookie to be visible for aaa.example.com and bbb.example.com, type example.com in the Cookie Scope.
Note: If you do not set the cookie scope, the domain for the cookie will be empty. Due to
inconsistent browser handling of cookies with no domain or an empty domain, it is strongly
recommended that you do not leave the cookie scope empty.
1. In Advanced Bot Protection, verify that the Settings menu item is selected.
2. Verify that the Website Groups tab is selected.
3. Select the Website Group to which you wish to add the Website. The Website Group Configuration window appears.
If you are subscribed to Connectors only, the Add Website dialog box appears as follows:
If you are subscribed to both CloudWAF and Connectors, the Add Website dialog box appears as follows:
5. If you are subscribed to CloudWAF and Connectors, verify that the Connectors option is selected.
6. Add a Website or a number of Websites as follows:
▪ If you want to add a single Website, select Exact match, and then type the Domain Name (FQDN).
▪ If you want to add Websites with the same prefix, select Prefix match and then type the Domain Prefix
(this field appears when you make this selection).
▪ If you want to add Websites with the same suffix, select Suffix match and then type the Domain Suffix
(this field appears when you make this selection).
This can be used for groups of social websites, where a name precedes a common suffix and you want to
add all those Websites. For example, to include both bob.fancypage.com and
jim.fancypage.com, you would type fancypage.com.
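The three match modes amount to simple string tests, roughly as follows (an illustrative helper, not the product's matching code):

```python
def domain_matches(fqdn, mode, value):
    """mode is one of 'exact', 'prefix', or 'suffix'."""
    if mode == "exact":
        return fqdn == value
    if mode == "prefix":
        return fqdn.startswith(value)
    if mode == "suffix":
        return fqdn.endswith(value)
    raise ValueError("unknown mode: " + mode)

# Suffix match picks up every Website sharing a common suffix:
domain_matches("bob.fancypage.com", "suffix", "fancypage.com")  # True
domain_matches("jim.fancypage.com", "suffix", "fancypage.com")  # True
domain_matches("bob.otherpage.com", "suffix", "fancypage.com")  # False
```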
• Editing the Default Policy: For more information, see Working with the Default Policy.
• Creating a per-Path Policy Assignment: For more information, see Creating a New per-Path Policy Assignment.
• Adding a Website: For more information, see Adding a Website.
• Editing the default Rate Limiting values: For more information, see below.
A Website Group has default Rate Limiting values that are applied by default to all the per-Path Policies in that Website Group that use Rate Limiting.
If you want a certain per-Path Policy to have values that are different from the default, you can configure them so. For
more information, see Configuring per-Path Policies for Endpoints with API Calls, Editing a per-Path Policy
Assignment, and Creating a New per-Path Policy Assignment.
You can define the default Rate Limiting values for any Website Group.
1. In Advanced Bot Protection, verify that the Settings menu item is selected.
2. Verify that the Website Groups tab is selected.
3. Select the Website Group whose default Rate Limiting values you wish to edit. The Website Group Configuration window appears.
4. Under Default Rate Limiting Values, type in the values you want based on the table below.
5. Click Save.
Name Description
Max requests per minute The maximum number of requests to the site in a minute that is allowable before rate limiting is triggered.
Max requests per session The maximum number of requests to the site in a single session that is allowable before rate limiting is triggered.
Max session length The maximum length of a session that is allowable before rate limiting is triggered. Select the time units from the adjacent drop-down list.
Editing a Website
You can edit the Cookie Scope and any of the Advanced Configuration options of a Website.
To edit a Website:
5. Under Websites, click the Website you want to edit. The Edit Website window appears.
6. Make your edits. If necessary, expand the Advanced Settings. For more information, see Cookie Scope and
Understanding the Website Advanced Settings.
7. Click Save.
Each of the advanced settings parameters is described in the topics below. Some of the advanced settings apply to
CloudWAF only. Others apply to Connectors only. And yet others apply to both.
The key used to encrypt the token. By default a single Encryption Key is assigned per Website Group but here you can
assign a different, unique Encryption Key to a particular Website. The drop down list displays all the Encryption Keys
added to that Website Group so far.
This is useful if you are managing security for different Websites and you do not want them to share an encryption
key.
The drop down list offers an account default encryption key. It is highly recommended that you use the account
default encryption key so that the token can be shared across all Website Groups in an account.
• Only Websites that have the same key can share tokens. Note other restrictions here: Cookie Scope.
• Website Groups created after this release have the default key but Website Groups created before this release
retain their original non-default encryption keys until you change them.
• You can assign the account default key to CloudWAF websites.
The AWS data region where you would like to save data. This is for compliance purposes. This appears in both
CloudWAF and Connector, but is only configurable in Connector.
For CloudWAF, the default Data Region for a Website Group is determined by the data region in your CloudWAF
settings.
For Connector, the default Data Region for a Website Group is United States. When you add another Website, you can
set the Data Region. If you want to change the Data Region for the original Website, edit that Website. For more
information, see Editing a Website.
Controls how Advanced Bot Protection determines the IP of the end user for challenge requests.
• Header Name: The name of the header that contains the end user's IP. If no name is specified, the IP as seen by Advanced Bot Protection is used.
• Reverse Index: If the header contains multiple, comma-separated IP addresses, this specifies the zero-based index of the IP to select, counting from the end of the list.
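Assuming that zero-based, from-the-end interpretation, the lookup could be sketched as follows (a hypothetical helper for illustration; a Reverse Index of 0 selects the last IP, 1 the second to last, and so on):

```python
def true_client_ip(headers, header_name=None, reverse_index=0, observed_ip=None):
    """Pick the end user's IP from a comma-separated header value,
    counting `reverse_index` positions back from the end of the list."""
    if not header_name or header_name not in headers:
        return observed_ip  # fall back to the IP as seen by the service
    ips = [ip.strip() for ip in headers[header_name].split(",")]
    return ips[-(reverse_index + 1)]

headers = {"X-Forwarded-For": "203.0.113.7, 198.51.100.2, 192.0.2.1"}
true_client_ip(headers, "X-Forwarded-For", reverse_index=0)  # "192.0.2.1"
true_client_ip(headers, "X-Forwarded-For", reverse_index=1)  # "198.51.100.2"
```

Counting from the end matters because proxies append their addresses to headers such as X-Forwarded-For, so trailing entries are the ones your own infrastructure added.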
Controls how Advanced Bot Protection determines the IP of the end user on analysis requests.
Enter a list of header names whose content you want CloudWAF to send to Advanced Bot Protection without masking
them. These headers can then be used as identifiers within conditions.
The header names are case-insensitive (as per the HTTP standard) but the entered capitalization is preserved.
You can configure up to two different SameSite cookies to store the Advanced Bot Protection token.
The secondary cookie exists for exceptional cases. Since using it doubles the amount of cookie data sent on each request, it is recommended that you use only one cookie unless two are necessary.
See the SameSite cookies documentation on the Mozilla website for more information.
Note: The cookie mode options are only supported for the following:
• CloudWAF
• Cloudflare connector version 1.21.0 and later
• Lambda@Edge connector version 1.20.0 and later
• Fastly connector version 1.1.2 and later
• Nginx/Openresty connector version 0.9.2 and later
This setting is display only and is not configurable. This is the path where the mobile SDK can communicate with Advanced Bot Protection.
You can enter Paths for which you do not want the Javascript tag sent to clients. For more information, see
Understanding How Advanced Bot Protection Handles Traffic.
Select the desired captcha service, if you want one. The options are:
• Geetest: This uses the Imperva CloudWAF captcha keys. Select either Easy, Normal, or Hard.
• Recaptcha v2: You can generate your own free keys from the Recaptcha pages on Google's website.
• Custom Geetest: Use your own keys for Custom Geetest.
4. Click the Rename button by the Website Group you want to rename. The Website Group's Name becomes a
text entry field.
5. Type in the new Name.
6. Hit Enter.
You can delete a Website Group at any time. When you do so, all its Websites are deleted as well.
4. Click the Delete button by the Website Group you wish to delete.
5. Click OK in the confirmation dialog box.
When you create a new website, it is by default given the account default encryption key, but you can choose a different encryption key if you like.
For an existing Website that uses CloudWAF, if the Website doesn't have the account default encryption key, you can configure it so that it does. If the website already uses the account default encryption key, you cannot change the encryption key.
For an existing Website using a Connector, you can change whatever key the Website is using. The account default key is among those offered.
4. Select the Website Group containing the Website whose encryption key you want to configure. The Website Group Configuration window appears.
5. Under Websites, click the Encryption Key icon of the Website whose encryption key you wish to
configure.
▪ If your Website is configured with CloudWAF, the following window appears:
If you want to set your encryption keys to the account default encryption key, click Set encryption keys
to account default.
From the Select a key from this account drop down list, select an encryption key and click the Add
button.
You can use the trash can icon to remove the current encryption key from the list of keys.
Updating a Configuration
When you make changes to your Advanced Bot Protection, these changes do not take place until you review them and
publish them.
If you have made changes, the main windows in Advanced Bot Protection display this warning at the top right:
To update a configuration:
The Publish Configuration summary displays all the unpublished changes that are pending, together with any
warnings that may be appropriate.
5. Review the changes and click Publish. The changes you have made become operative.
Notes:
• Snapshot and restore functionality is available via the API only. For more information, see
Advanced Bot Protection API.
• A snapshot is valid for 180 days after creation only.
• A snapshot includes:
• Sites (with per-path policies/selectors)
• Domains (with domain token encryption keys)
• Conditions
• Policies (with policy directives)
• A snapshot does not include:
• Your API credentials
• Publish your configuration before making a snapshot in order to make sure that it is working.
• After performing a restore, publish the account so that the restored state takes effect.
• When you use the snapshot feature in combination with Imperva CloudWAF, the restoration
of a snapshot will be refused if one of the CloudWAF websites in the snapshot has been either
deleted or moved to a different account. If you wish to restore the snapshot, you must first
add the website again, or move it back to the original account.
In order to identify web browsers, Advanced Bot Protection uses HTTP metadata checks, browser challenges, and machine learning algorithms, and enforces the presence and validity of a token generated with JavaScript, in order to tell the difference between bots and humans. However, native mobile applications generally communicate via API calls. Because most native mobile apps do not load web pages and execute JavaScript, they would appear malicious without a special way to identify them.
Imperva’s Advanced Bot Protection SDK is purpose-built to both identify native mobile applications as well as provide
security controls around their use, much like our web protection mentioned above.
As a result, only real devices with your real mobile applications used by real humans are allowed to interact with your
mobile API endpoints, as well as any traffic you have allow listed.
1. Initialize the SDK's Protection object somewhere near the start of the mobile application: Specify the full protocol, FQDN, and challenge path (found in your domain advanced settings; for more information, see Understanding the Website Advanced Settings) in the string passed to the initialization call. Create one Protection object for every FQDN you are communicating with.
For example, let's assume your application makes requests to www.example.com, api.example.com, and
static.example.com. In this example, assume both www.example.com and api.example.com are protected by
Advanced Bot Protection. However, static.example.com is not. You should create two Protection objects, one for
www.example.com and one for api.example.com.
If you have initialized the Protection object successfully, at application startup the SDK (running inside your
application) automatically carries out the following:
Note: If you are using version 3.x of the SDK, do not supply the challenge path. If
you are using version 2.x of the SDK, you need to supply the challenge path.
Never cache the token locally; the SDK takes care of this for you and automatically refreshes the token if need
be.
Use the initialized object you created in step 1 for the specific FQDN you are communicating with. Do not create
a new object for every HTTP request.
3. Add the token as either a header or cookie: Add the token with name X-D-Token and the value received from
the getToken() function to the HTTP request and send the HTTP request as normal.
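At the HTTP level, this step amounts to attaching one extra header to each request. A Python sketch (urllib is used only for illustration; getToken() is the SDK call and is not shown here):

```python
import urllib.request

def build_protected_request(url, token):
    """Attach the Advanced Bot Protection token as the X-D-Token
    header, exactly as received from the SDK's getToken() function."""
    return urllib.request.Request(url, headers={"X-D-Token": token})

# token = protection.getToken()   # SDK call for this FQDN (not shown)
# req = build_protected_request("https://api.example.com/v1/items", token)
# urllib.request.urlopen(req)     # send the HTTP request as normal
```

In a real mobile app you would do the same with your platform's HTTP client: fetch the token from the initialized Protection object, set the X-D-Token header (or cookie), and send the request as normal.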
Notes:
▪ The SDK includes documentation in the form of an INSTALL.md file as well as documentation
in the zip files.
▪ The SDK includes a minimal application that shows examples of the above process.
Note: There is a debug key available for more verbose testing (see the included HTML documentation inside the SDK zip files for more information on the constructor to use with the debug key); however, you should NEVER release your application with the debug key active.
Test for the following items once the SDK is bundled into your application:
1. Ensure all calls from your mobile app to the protected API have a token appended.
You can view this in the Get Logs for an IP dashboard. If any API calls do not have the token, you will see
no_token in the flags field of this dashboard. You may not be adding the token to all requests on purpose, e.g.
you request an application settings / feature flags object before SDK initialization. Naturally, that request would
not have a token yet, and you should allow list that path in the Advanced Bot Protection console. To allow list an
entire path and exempt it from Advanced Bot Protection, create a new per-path policy and select no policy and
no rate limiting for the values in the per-path policy dialog box. For more information on adding a per-path
policy, see Creating a New per-Path Policy Assignment.
3. Test your app when your policy is in active mode. You should see requests fail to load if you move the debugger/
emulator condition to active while using a debug build or desktop emulator in this test scenario. To ensure the
debugger/emulator condition exists in your policy, contact Imperva support.
You can use one of the following Imperva FTP sites according to your geographical location:
• USA: ftp://ftp-us.imperva.com
• Europe: ftp://ftp-eu.imperva.com
The files are located under /Downloads/ABP SDK. The SDK is separate for iOS and Android. You will need the
respective version of each SDK for each operating system.
On the web version of Advanced Bot Protection, browsers execute the Imperva JavaScript tag on the page like any
other first party JavaScript tag. In many cases, Imperva can add the tag to the page on your behalf without any
changes to your application. However, there is no way for Imperva to ask your native mobile application to perform
any extra challenges because it is a pre-compiled application, there is no way to modify it in real time, and every
application is different. Therefore, your application must include the SDK that generates the token and profiles the
device, and you must add the SDK to your application yourself before your users download it from their respective
app store.
The SDK uses the same communications channel as the mobile app. As long as the Imperva service is online, it will
retrieve the token. If the Imperva service is offline, chances are all traffic is sent directly to the customer’s origin
server. Therefore, the app should not catastrophically fail in the presence of a blank token returned from the SDK, as
the requests will simply go to the origin and the app will work anyway. At worst, the bot protection is actually offline in
this scenario. It is also recommended that you employ a circuit-breaker that backs off from calling getToken() when in
prolonged error state.
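The recommended circuit breaker might look something like the following sketch. This is an illustration of the pattern, not Imperva code; makeTokenBreaker, its thresholds, and the blank-token fallback are hypothetical, and getToken stands in for the SDK call.

```javascript
// Illustrative circuit breaker around getToken(): after maxFailures
// consecutive errors it stops calling for cooldownMs and returns a blank
// token, so requests keep flowing to the origin instead of stalling.
function makeTokenBreaker(getToken, maxFailures = 3, cooldownMs = 60000) {
  let failures = 0;
  let openedAt = 0;
  return async function tokenOrBlank() {
    if (failures >= maxFailures && Date.now() - openedAt < cooldownMs) {
      return ""; // circuit open: skip the call entirely
    }
    try {
      const token = await getToken();
      failures = 0; // success closes the circuit
      return token;
    } catch (err) {
      failures += 1;
      if (failures >= maxFailures) openedAt = Date.now();
      return ""; // a blank token must not crash the app (see above)
    }
  };
}
```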
The getToken() method can fail to fetch a token due to various reasons such as lack of network connectivity, internal
errors etc. You should catch these standard error types in your application, and the application should handle these
errors with a similar strategy as for the connections to the API server. For example, the application shows a dialog to
the user requesting them to check their connectivity status. Please refer to our sample applications bundled in the
SDK zip files for examples of catching network exceptions.
The exception types are documented through the function signature on Android, and on iOS the errors are in the doc
comment on getToken(). There is also method level documentation in the header file.
What impact does the SDK have on my application in terms of latency and load time?
Because the SDK has a small footprint, load time and memory impact are minimal. In terms of latency, there are only
two extra round trips required for token generation every 10 minutes. The token is then cached locally, and there is no
additional latency impact.
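The caching behavior described above can be illustrated with a small sketch. The real SDK does this internally; makeTokenCache and fetchToken here are hypothetical stand-ins, with the 10-minute refresh interval taken from the text.

```javascript
// Sketch of time-based token caching: fetch at most once per TTL window.
function makeTokenCache(fetchToken, ttlMs = 10 * 60 * 1000) {
  let token = null;
  let fetchedAt = -Infinity;
  return async function cachedToken(now = Date.now()) {
    if (now - fetchedAt >= ttlMs) { // stale or never fetched: refresh
      token = await fetchToken();
      fetchedAt = now;
    }
    return token; // otherwise served locally, no extra round trip
  };
}
```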
Jailbroken devices are not necessarily bad actors. Jailbroken devices are indicated as such in the logs, but they are
not blocked from accessing API servers by default.
How is the Advanced Bot Protection SDK actually deployed and what do I need to do from
my side?
The SDK uses the existing Advanced Bot Protection platform and is deployed with those deployment options. You only
need an instance of Advanced Bot Protection protecting your domain, in addition to integration of the SDK library
with your app.
For more information, see Getting Started with Imperva Advanced Bot Protection and Installing the SDK.
Token requests are completely transparent and are handled by the Advanced Bot Protection SDK when your code calls
the getToken() function.
The SDK requests a new token every ten (10) minutes to prevent each API request from requesting a token. Therefore,
there is no added latency unless an API request happens exactly when a new token is required. The entire
challenge/response and token request process takes well below one (1) second.
Advanced Bot Protection checks traffic against its list of known threats. Known threats include a mix of known
violators, data centers, identities, aggregator user agents, and automated browsers. For example, if Advanced Bot
Protection has detected a known violator on another site, your own site is automatically protected from that threat.
VPN exit nodes usually come from data centers, so Advanced Bot Protection would detect those with this check.
Does the SDK work when a user clicks a certain link, or can it run in "stealth" mode all the
time?
The SDK is in constant operation. Every few minutes, the SDK automatically does a full check of the device it is
running on and reports back to the Advanced Bot Protection instance on the threats it has detected (emulators, device
farms, jailbroken devices, etc). Advanced Bot Protection then tracks violations by issuing a temporary token, which is
included on each request back to origin for each API call.
Yes.
No.
However, Xamarin appears to have support for binding to Objective-C and Java code. Thus Xamarin seems likely to
work with some effort. For more information, see Microsoft's documentation on Objective-C and Android-callable
wrappers.
AAR.
Is the iOS variant a precompiled framework (.framework files) or a plain static library?
Yes, if care is not taken. This is normally handled in one of three different ways:
• Allow list the user agent of old versions (if the version is in the user agent) until you reach a critical mass of users
on the new platform.
• Create new API endpoints for the version of the app with the SDK (e.g api2.example.com) and protect that
version only.
• Force the end-users to upgrade once your new application containing the SDK is launched.
Objective-C.
Does the iOS SDK compile if Bitcode is enabled in the project? This is app thinning related, our
builds use Bitcode for App Store distribution.
Yes, but there can be incompatibilities in the bitcode depending on which Xcode (LLVM) version is used. The version of
Xcode Imperva uses to compile the SDK is documented in the README.
I see the prefix "debug:" on the token value sent by the SDK. What does this prefix mean? Do I
need to remove it or do something differently?
An internal error may prompt the SDK to return a debug: prefixed token instead. This token contains extra error
information and should be sent with the request the same as a normal token. Please open a support ticket so that we
can investigate the root cause of the internal error.
Verify that you are using a recent version (2.x or 3.x) of the Advanced Bot Protection SDK. The SDK follows Google’s
guidance for SDKs to target API 29.
Note: These instructions include procedures for actions in non-Imperva proxies. While these
procedures were tested and found correct at the time of writing, they may change without
Imperva's knowledge and thus Imperva cannot take responsibility for their accuracy. For more
information, see the relevant documentation.
Follow the procedure below to integrate Advanced Bot Protection with Cloudflare.
Notes:
• These instructions include procedures for actions in Cloudflare. While these procedures were
tested and found correct at the time of writing, they may change without Imperva's
knowledge and thus Imperva cannot take responsibility for their accuracy. For more
information, see the Cloudflare documentation.
• The machine on which you build the integration package must have Node.js installed.
• On every page in your web domain that you want to protect, you must add the following line
in the html header section:
where challenge-path-value is the same text string that you enter into the CHALLENGE_PATH=
statement in the config.js file.
It is recommended that you create a name for the challenge path that looks as if it is part of your
own web application. This will decrease the likelihood that the protection is blocked by
adblockers.
If you have not yet added a Website, add one by referring to Creating a Website Group and Adding a
Website.
For example:
CONFIGURATION=
analysisHost: "https://bon-staging.distil.ninja",
apiKeyId: "0068595c-9a8e-567o-2hrt-e3f32729ccbe",
apiSecretKey: "vB3xdfnedieufskHsbp/+7PnakbRbI4BNS3",
debugHeaderValue: "4286e123456789fd077d4720f85e72c7bdcc",
tokenEncryptionKey: "xmvbtv/abcDEfGRFIfBN/XdPQTsYv7PpmF6GRJOF7ZpMbRCw7BUphPPuumxMleE/+QbUFTfXysCpHNELjmP3FA=="
8. After the CHALLENGE_PATH= statement, paste the text string that represents the challenge path as you
entered it into your web pages. See the notes, above.
9. Save the config.js file.
2. If you want to change the default setting for failure handling to a more rigorous setting at a cost of greater user
latency, configure failure handling. For more information, see Understanding Failure Handling for Advanced Bot
Protection with Third Party Products.
3. Build the integration package:
1. Verify that you have Node.js and npm installed. For more information, see the npm documentation.
2. Open a CLI application like Terminal or Command Prompt.
3. Navigate to the unzipped/unpacked folder/directory that contains the Reference Implementation.
4. Run the command npm install. npm fetches additional dependencies in order to work.
5. Run the command npx webpack. The dist/index.js file is created.
6. Open the dist/index.js file in a text editor.
7. In your Cloudflare account, create a new Worker and assign a Route.
8. In the Cloudflare editor, delete the sample Cloudflare code in your new Worker.
9. Copy the content of your dist/index.js file and paste it into the new Cloudflare Worker.
10. Save the Worker.
11. Add a Route and assign your new Worker to the Route. This Route should encompass all the pages you
wish to protect. The simplest route is of the form *.example.com/*.
12. Save the Route.
4. Test your integration. For more information, see Testing the Integration of Advanced Bot Protection with Third
Party Products.
Follow the procedure below to integrate Advanced Bot Protection with Lambda@Edge on AWS Cloudfront.
Notes:
where challenge-path-value is the same text string that you enter into the CHALLENGE_PATH=
statement in the config.js file.
It is recommended that you create a name for the challenge path that looks as if it is part of your
own web application. This will decrease the likelihood that the protection is blocked by
adblockers.
• The most robust strategy for bot protection involves inspecting all requests to your site. In
order to inspect all requests to your site, add the Lambda function to all dynamic content
Behaviors, including the default Behavior, in step 5, below.
If you have not yet added a Website, add one by referring to Creating a Website Group and Adding a
Website.
For example:
CONFIGURATION=
analysisHost: "https://bon-staging.distil.ninja",
apiKeyId: "0068595c-9a8e-567o-2hrt-e3f32729ccbe",
apiSecretKey: "vB3xdfnedieufskHsbp/+7PnakbRbI4BNS3",
debugHeaderValue: "4286e123456789fd077d4720f85e72c7bdcc",
tokenEncryptionKey: "xmvbtv/abcDEfGRFIfBN/XdPQTsYv7PpmF6GRJOF7ZpMbRCw7BUphPPuumxMleE/+QbUFTfXysCpHNELjmP3FA=="
8. After the CHALLENGE_PATH= statement, paste the text string that represents the challenge path as you
entered it into your web pages. See the notes, above.
9. Save the config.js file.
2. If you want to change the default setting for failure handling to a more rigorous setting at a cost of greater user
latency, configure failure handling. For more information, see Understanding Failure Handling for Advanced Bot
Protection with Third Party Products.
3. Build the integration package:
1. Verify that you have version 14 of Node.js and npm installed. For more information, see the npm
documentation.
2. Open a CLI application like Terminal or Command Prompt.
3. Navigate to the unzipped/unpacked folder/directory that contains the Reference Implementation.
4. Run the command npm install. npm fetches additional dependencies in order to work.
5. Run the command npm run build:package. The lambda-function.zip file is created.
4. Create a Lambda@Edge function:
1. Log into your AWS console.
2. Select Services > Lambda. Note that you must be in the US East (N. Virginia) Region.
3. Click Create function. The Create function window appears.
4. Type a Function name, for example, distil.
5. Verify that the Runtime option is Node.js 14.x.
6. Expand Choose or create an execution role.
7. Select Create a new role from AWS templates.
8. Type a Role name, for example, distilrole.
9. Under Policy templates, select a policy that can invoke Lambda function at Cloudfront edges, for
example, Basic Lambda@Edge permissions.
10. Click Create function. The Functions window for your new function appears.
11. Under Function code > Code entry type, select Upload a .zip file.
12. Under Function package, click Upload. In the dialog box, navigate to the lambda-function.zip file you
created in Step 2.
13. Click Open. The dialog box closes.
14. Click Save.
15. Select Actions > Publish new version.
If you are using AWS's Legacy cache settings, perform the following steps:
▪ Click Edit.
▪ Set Headers to All.
▪ Set Query Strings to All.
▪ Set Cookies to All.
▪ Under Lambda Function Associations, verify that the CloudFront Event is set to Viewer Request,
and paste the ARN that you copied earlier into the Lambda Function ARN field.
▪ Click Include Body.
▪ Click Yes, Edit.
If you are using AWS's Cache policy and origin request policy (recommended):
If you are an existing Advanced Bot Protection user with an existing policy, select your existing policy from
the Cache policy drop down list, then:
▪ Under Lambda Function Associations, verify that the CloudFront Event is set to Viewer Request,
and paste the ARN that you copied earlier into the Lambda Function ARN field.
▪ Click Include Body.
▪ Click Yes, Edit.
Since the introduction of Node.js 14.x, Amazon Web Services no longer supports Node.js 10.x. If you originally set up
your Advanced Bot Protection environment to work with Node.js 10.x, you must do one of the following:
• Deploy your current lambda function (built with node 10) to Cloudfront, specifying node 14 as the runtime
• Rebuild the ABP Connector 1.21.2 release package (with your personalized config.js) using node 14 and deploy
the resulting Lambda function, specifying node 14 as the runtime
To deploy your current lambda function (built with node 10) to Cloudfront, specifying node 14 as the runtime
The above procedure should work without error. However, a more certain but more involved method is as follows:
To rebuild the ABP Connector 1.21.2 release package (with your personalized config.js), using node 14 and deploy the
resulting Lambda function specifying node 14 as the runtime:
1. In Integrating Imperva Advanced Bot Protection with Lambda@Edge on AWS Cloudfront, carry out Step 3 in its
entirety.
2. In AWS, select Services > Lambda > Functions.
3. Select your existing function.
4. Select Code Source > Upload from and select .zip file.
5. Navigate to the file you just created in Step 1 and click OK.
6. Under Runtime, select Node.js 14.x.
7. Click Save.
8. At the top right, select Actions > Publish new version.
9. In AWS, select Services > Lambda > Functions > Copy ARN.
10. In AWS, select Services > Lambda > Cloudfront.
11. Select the distribution and click the Behaviors tab.
12. Select the first behavior and click Edit.
13. Under Function associations > Viewer request, replace the Function ARN / Name with the new ARN that you
copied.
14. Click Save changes.
15. Repeat steps 11 - 13 for the second (Default) behavior.
Follow the procedure below to integrate Advanced Bot Protection with F5.
Notes:
• These instructions include procedures for actions in F5. While these procedures were tested
and found correct at the time of writing, they may change without Imperva's knowledge and
thus Imperva cannot take responsibility for their accuracy. For more information, see the F5
documentation.
• On every page that you want to protect in your web site, you must add the following line in
the html header section:
where challenge-path-value is the same text string that you enter into the
CHALLENGE_PATH= statement in the settings.js file.
It is recommended that you create a name for the challenge path that looks as if it is part of your
own web application. This will decrease the likelihood that the protection is blocked by
adblockers.
• The file `imperva.tcl` contains an example integration which will work out-of-the-box. It can
be modified based on your requirements, however Imperva cannot guarantee functionality if
the rule is modified.
If you have not yet added a Website, add one by referring to Creating a Website Group and Adding a
Website.
6. In your unzipped/unpacked folder/directory that contains the Reference Implementation, locate the file
credentials.js and open it using a text editor like Notepad++.
7. Paste the copied block of code from Credentials as the entire content of the credentials.js file. Save the
file.
8. Open the settings.js file using a text editor and edit it as follows:
"CHALLENGE_PATH": "/my-challenge-path",
"SDK_CHALLENGE_PATH": "/my-sdk/v1/challenge",
"TLS_TO_ORIGIN": "false"
where
▪ </my-challenge-path> is the path that the Connector will use to inspect traffic. It is
recommended that you make this path look like part of your website so that it is not blocked by end
users' add-ons and adblockers.
▪ </my-sdk/v1/challenge> is the path that the Imperva Advanced Bot Protection Mobile SDK
will use to transmit challenge data to the Imperva backend.
▪ TLS_TO_ORIGIN - If your load balancer uses HTTPS to communicate with your backend pools, set
this value to the string "true". If not, leave it as "false", in order to have the load balancer
offload SSL and communicate with the backend pools via HTTP.
9. Save the settings.js file.
2. Create the F5 plugin:
Note: Since the Advanced Bot Protection plugin requires the f5-nodejs library which is provided by
F5, you must provide an exported workspace to be repackaged. If you do not have such a plugin,
you may create one with the following steps.
12. Using the command line, navigate to the directory where you extracted the Imperva provided Reference
Implementation.
13. Build the config generator docker container by running the following command:
14. Run the container sharing the current directory with the container's /usr/imperva-f5 directory. Examples
for various shells are as follows:
▪ bash: docker run -it --rm -v $(realpath .):/usr/imperva-f5 imperva-config
▪ fish: docker run -it --rm -v $PWD:/usr/imperva-f5 imperva-config
▪ powershell: docker run -it --rm -v ${pwd}:/usr/imperva-f5 imperva-config
Notes:
▪ If you see a pop-up about sharing your filesystem with the container, select
allow.
▪ If no pop-up appears and the container gives an error about lack of filesystem
permissions, open your Docker settings, go to Resources > File Sharing and
click on the + icon to add a new directory. Add the directory from step 1 where
you extracted the Imperva provided archive and click on Apply & Restart.
▪ If your shell is not one of those in the above examples, refer to the Docker
documentation to see how to share your local filesystem with the container.
After the container runs, there will be a new file, imperva-f5.tgz, in your directory. You will use this file in the next
section to install the integration.
Note: The default serverssl profile is acceptable if none is already in use. Click Update if adding
the server SSL profile.
Adding this profile when offloading SSL on the load balancer and sending HTTP requests to the
origin (TLS_TO_ORIGIN: "false") may cause an outage until the integration is activated. Be
sure to activate this profile during a scheduled maintenance window only.
The intended audience for these particular instructions is seasoned Linux engineers familiar with the command line
and capable of debugging and troubleshooting. Please contact your Imperva Sales Engineer and Account Executive if
you are unsure about utilizing the OpenResty connector.
Notes:
1. Get the example integration code - the Reference Implementation - by clicking on the appropriate link in the
Advanced Bot Protection Integration Library. Download the zip file to a location on your computer.
2. Unzip/unpack the zip file to a location on your computer.
3. Follow the instructions in the example given in the readme file from the Nginx/Openresty integration library.
Edit this file; the edits you make are shown on the page to bots (or humans, in the case of a false positive). It may be
helpful to include JavaScript to display the local time, IP address, and other debug information.
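For example, a hedged sketch of the kind of debug JavaScript you might add to the page. buildDebugInfo is a hypothetical helper; how you insert its output depends on your page's markup (e.g. assigning it to an element's textContent).

```javascript
// Build a debug string for the interstitial page. The caller supplies the
// values (e.g. new Date() and a client IP echoed into the page template).
function buildDebugInfo(now, clientIp) {
  return "Local time: " + now.toISOString() + " | IP: " + clientIp;
}
```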
You can configure the language for the captcha itself via the window.geetestLang and window.recaptchaLang
variables by specifying the desired language. For more information, see each captcha provider's documentation.
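For instance, a page-level snippet along these lines could set the captcha language. The variable names come from the text; the language codes shown are assumptions, so check each provider's documentation for supported values.

```javascript
// Set the captcha UI language before the captcha scripts load.
// "en" is an example value; consult the provider docs for valid codes.
window.geetestLang = "en";    // GeeTest
window.recaptchaLang = "en";  // reCAPTCHA
```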
Note: In the interstitial page there are several fields surrounded by the {{ and }} characters. These
are template strings for the integration and should remain in the page.
This procedure tests for minimum backend connectivity and confirms that the Advanced Bot Protection API is
indeed accessible to requests on the protected web server.
To test minimum backend connectivity of the Advanced Bot Protection integration with Connectors using the debug
header:
1. Verify that the config.js file that you use in the integration has a value for the parameter x-distil-debug. This
should have been done during the integration process.
2. Either:
Use the debug header extension of your browser to send the test value to the ABP server:
1. In the extension to your browser that can add or modify http request headers, set a parameter
x-distil-debug to the same value that x-distil-debug has in the config.js file.
2. Use the browser to make the specific x-distil-debug request. For more information, see your browser's
documentation.
or:
Use a curl command to send the test value to the ABP server:
Testing the Functionality of the Integration of Advanced Bot Protection with Connectors Using
the Script
This procedure tests that the basic action functionality of Advanced Bot Protection is working properly on the
protected web server.
The procedure tests the functionality of these basic actions, with the following results if working:
To test the functionality of the integration of Advanced Bot Protection with Connectors using the script:
Notes:
▪ Verify that captcha is enabled for the site you are testing or the captcha checks will not work.
For more information regarding captcha on CloudWAF, see Web Protection - Security Settings.
▪ For more information regarding captcha on the Connectors, see Understanding the Website
Advanced Settings.
▪ When using Lambda@Edge, verify that the lambda function is associated with the path you
are testing, or add it to all the non-static behaviors. For more information, see Integrating
Imperva Advanced Bot Protection with Lambda@Edge on AWS Cloudfront.
If the API were down, causing an error on timeout, no traffic would be allowed into the website and the website would
be effectively disabled. To avoid this situation, you can employ one of two available strategies for dealing with API
failure:
• Maximum Efficacy: This strategy strives to process the maximum number of requests, even at the expense of
latency experienced by end-users.
You instruct Advanced Bot Protection to process all requests. If, after the user-configurable timeout (say, 2
seconds), a request is not being handled by the API, the system allows that request to go directly to the origin,
and so on with subsequent requests. Once the problem is resolved and requests are again being processed, all
requests are routed through the API and bot protection is optimized.
• Minimum Latency:
This strategy disengages protection after a certain number of requests have failed, thus resulting in requests not
going to the API at all.
You instruct Advanced Bot Protection not to process any requests if there is any latency at all. Then, after the
user-configurable timeout, Advanced Bot Protection attempts to process another request. If successful, all
requests are routed through the API and bot protection is optimized.
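As an illustration of the Maximum Efficacy idea, the following sketch races the API decision against a timeout and allows the request through if the API does not answer in time. This is not the Connector's actual code; checkRequest, the result shape, and the 2-second default are hypothetical (the timeout value itself is user-configurable, as described above).

```javascript
// Maximum Efficacy sketch: never hold a request longer than timeoutMs.
async function decideWithTimeout(checkRequest, request, timeoutMs = 2000) {
  const fallback = new Promise((resolve) =>
    setTimeout(() => resolve({ action: "allow", reason: "api-timeout" }), timeoutMs)
  );
  // Whichever settles first wins: the API verdict, or the allow fallback.
  return Promise.race([checkRequest(request), fallback]);
}
```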
To configure a failure handling strategy for Advanced Bot Protection with Connectors:
1. After you have downloaded the third party Reference Implementation package to your machine and before you
execute the integration, navigate to the src folder in the unzipped/unpacked folders.
2. Open the index.ts file in a text editor.
3. In the subsection about fetcher, edit the newDistil.<failure-handling-strategy-parameter> as
follows:
4. If you like, you can also change the timeout value in the line below.
5. Save the file.
The positive value or negative utility of a bot is a complex issue; it is not a function of the bot and how it functions,
but of the following:
• What it is designed to do
• How it impacts a business
• The perspective of the organization on the bot’s activity – as will be seen, not everyone in an organization has
the same view regarding a particular bot.
This means that there is no simple binary block/allow choice when a certain type of traffic is identified. Indeed, there
are good bots that everybody knows about, but there are many bots which lie in a gray area and so you must very
carefully examine the effects of such a bot, and understand how it impacts your particular business, to intelligently
tailor your defense against it.
• Obvious good bot: A search engine crawler is a bot that crawls your website in order to index it and rank it on
search engines. This is universally regarded as a good bot – everyone wants their business’s website to place
high on searches.
• Obvious bad bot 1: An Account Takeover attack uses bots to bombard a website login form in order to try to
crack user credentials. This is generally regarded as a bad bot – no one wants a reputation as a website from
which user data can be easily stolen.
• Obvious bad bot 2: A price scraper is a bot that takes pricing data from a website and uses it elsewhere. This
involves massive amounts of data. In the case of one seller of car parts and service, a bot added every single
item in the inventory to a shopping cart. This amount of activity created load that crashed the website.
• Complex bot: What about a bot that buys tickets from a vendor – say for an event – and then relists them for a
much higher price? This is called snatching limited availability inventory. Here things get complicated.
The Sales department is happy as their tickets get sold. However, Customer Service and Marketing are unhappy
because of upset customers complaining about the inflated prices of the resold tickets and the unavailability of
tickets at the advertised price – denial of inventory.
The organization needs to decide if the activity that is driving sales is worth the brand damage. You may have
already decided to take action against this type of abuse. If so, the instructions below show you how.
The significance of this last dilemma cannot be overstated. It illustrates a vital principle: you must
carefully tailor and focus your bot solution to your precise business needs.
Further, bot attacks are not always security threats. For example, imagine a company whose business is a global
distribution system (GDS) for the airlines industry – a clearing house for flight sales. The airlines’ customers use the
GDS service to search for complex flight combinations and the airlines pay the GDS company for each search. The
price is based on a presumed ratio of look-to-book. But a bot that is price scraping on the GDS website increases the
"looks" without altering the "books", thus inflating the airlines’ overage fees for using the GDS service. The presumed
ratio, together with the entire business model, is ruined.
In summary, with Advanced Bot Protection, you want to deploy protection narrowly to solve a specific problem or use
case that is meaningful to your business, one that eases a measurable and felt pain. Remember that blocking bots
means blocking traffic, and traffic is one of your most valuable assets, one you have invested substantial resources to
generate.
Bad bot problem: Account Takeover (aka Credential Stuffing, Credential Cracking)
• How it hurts the business: Stolen credentials tested on your site. If successful, the ramifications are account
lockouts, financial fraud, and increased customer complaints affecting customer loyalty and future revenues.
• Signs you have a problem: Increase in customer account lockouts and customer service tickets. Increase in fraud
(lost loyalty points, stolen credit cards, unauthorized purchases). Increase in chargebacks.
• Industries targeted: Any business with a login page requiring username and password.

Bad bot problem: Account Creation (aka Account Aggregation); sign-up promotion abuse
• How it hurts the business: Free accounts used to spam messages or amplify propaganda. Exploit any new account
promotion credits (money, points, free plays).
• Signs you have a problem: Abnormal increases in new account creation. Increased comment spam. Drop in
conversion rates from new accounts to paying customers.
• Industries targeted: Messaging platforms, social media, dating sites, communities, gambling.

Bad bot problem: Denial of inventory
• How it hurts the business: Bots hold items in shopping carts, preventing access by valid customers. Damaged
customer reputation because unscrupulous middle men hold all inventory until resold elsewhere.
• Signs you have a problem: Increase in abandoned items held in shopping carts. Decrease in conversion rates.
Increase in customer service calls about lack of availability of inventory.
• Industries targeted: Scarce or time-sensitive items; airlines, tickets, retail, healthcare.
Business Services: Real estate, third party vendors like Retail platforms, CRM systems, business metrics.
Bot problems: Attacks on the API layer, Data Scraping, Account Takeover.

Food & Beverages: Food delivery services, online grocery shopping, food & beverage brand sites.
Bot problems: Credit Card Fraud, Gift Card Fraud, Account Takeover.

Society: Nonprofits, faith and beliefs, romance and relationships, online communities, LGBTQ, genealogy.
Bot problems: Data Scraping, Account Takeover, account creation, testing stolen credit cards on donation pages.

Sports: Sports updates, news, live score services.
Bot problems: Data Scraping (live scores, odds etc.).
You may have a very strong idea of what your organization’s bot problem is. On the other hand, you may simply
be responding to a general "get bot protection" instruction with only a vague concept of what it means. If you
fall in the latter camp, use the above table as your starting point, identifying the vertical which defines your
business, and the bot vectors that that vertical is likely to attract.
2. Finding the paths on the application that are associated with the use case
Bot writers spend enormous effort studying their targets so that their bots get right to the vulnerable points as
efficiently as possible. This reconnaissance includes creating fake accounts to study the target paths, studying
the target’s javascript tags, analyzing the target’s cookies and, most importantly, identifying the URLs and APIs
that are pertinent to their objective.
Understanding this last point is critical. Bots target particular URLs, those that suit their objectives. A credential
stuffing bot focuses on the URL that submits and validates credentials. A flight data scraping bot targets the URL
that sends the search query to the origin that returns the flight search results. And so it goes. Bots do not waste
their time on other parts of the application. They go directly to where the value is.
Once you have identified your use case, find the paths on your application that are associated with that use
case. Those paths are where you are going to set up your defenses.
Be aware that a common mistake is protecting the page that hosts the form rather than the true submission request. For example, a user goes to www.website.com and clicks "Login", which takes the user to the page www.website.com/Login with a form to submit their sign-in credentials. You might think this is the page that needs to be protected. However, if you inspect the traffic with browser developer tools, you see that clicking "Log Me In" after typing in the credentials sends a POST to www.website.com/authenticate. This is the path you need to focus on. A bot might hit the /login page, depending on the application and how its operator scripts the bot, but the real target is /authenticate, and that is what you want to zero in on in protection tuning and reporting.
You know your use case, and you know the paths on your application that are associated with it. How do you
proceed?
Imagine you are setting up protection against credential stuffing/cracking, i.e. account takeover (ATO). The login
endpoints that you have are as follows:
Bots, especially advanced bots, are persistent by nature and will quickly move to the other endpoints on the
application that allow them to accomplish the same goal, but that have weaker defenses in place. Thus it is not
sufficient to onboard just the www endpoints to your ABP protection. You need to onboard them all, including
secure.website.com/signin and m.website.com/signin. It is important that you think ahead.
Be thorough. Make sure that you have covered all your endpoints, including those on legacy applications. If your APIs are used by native mobile applications, you will require Imperva's Mobile SDK to adequately defend those APIs. For now, you may get by with allow-listing them if they are not being abused. Contact your Imperva account executive, sales engineer, or customer success manager if you have any questions about what is necessary.
You know the use case, you know the paths, now you need to enable protection for those paths, and those
paths only.
▪ It is easier to show your organization’s leadership the value of ABP if you can show specific examples of,
"we got attacked here, we applied appropriate protection, and here we can show that that attack vector
has been mitigated."
▪ It promotes a prioritization focus and learning function. Catch-all processes often do not work or cause
excessive attention to attack vectors that have marginal effect. Focusing protection on the HVT paths only
is much more cost effective.
▪ It reduces the damaging effects of false positives. Since blocking bots means blocking traffic, and traffic is
your website’s asset, it is of critical importance to block only where absolutely necessary. Start at your
main pain point. See from the dashboard how successful that is, from the points of view of blocking bots
and keeping false positives low. Then you can begin to experiment with expanding coverage to the less
important paths.
5. Building reporting views that show detected bots on these HVT paths
When creating dashboards that are designed to show the effectiveness of the bot protection, the rate of false
positives and other data, a common user error that causes much data to be buried is to pull site-wide statistics.
You must focus on the use case in order to really understand the effects of your ABP intervention.
This is where the value of having specific paths defined at the very beginning comes full circle. It becomes a lot easier to show traffic charts and the percentage of bot traffic detected, and generally to build a more focused understanding of what ABP is (and, possibly, is not) detecting. Often, addressing a false negative can be as powerful in proving legitimacy as speaking to the true positives.
The easiest approach here is to group sections by use case, bundling all the paths that belong to those use cases. From there, you can show time series charts that clearly separate detected bot traffic from other traffic, give overall statistics and percentages for use within your organization, and couple them with data like "top offenders."
When you are ready to begin with ABP, you should have a Website or set of Websites you are looking to onboard. If you
are using CloudWAF you first need to onboard the Website to CloudWAF. For more information see Getting Started
with Imperva Advanced Bot Protection. If you are using a standalone Connector, follow the steps here Getting Started
with Advanced Bot Protection - Using a Connector for your respective integration point.
Next you want to enter Advanced Bot Protection’s settings, and from there set up a Website Group.
Setting up Website Groups is intuitive when you have only one or two Websites. It can become trickier if you intend to put lots of distinctive Websites on ABP, or lots of Websites fronting the same underlying application (for example: site.com, site.co.uk, site.de, site.fr, etc.).
The main advantage to Website Groups is that you are grouping Websites so that changes to protection configurations
will apply to all Websites under that Website Group. For example, if you create a Website Group called My Website
Group and add website1.com, website2.com, and website3.com to it, when you add a Condition to block IP =
1.1.1.1, that Condition will apply to all three of those Websites.
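The grouping behavior can be sketched like this (the data structures and names below are illustrative assumptions for the example just given, not ABP's internals):

```python
# Illustrative sketch: a Condition added at the Website Group level applies
# to every Website in the group. The dictionary schema is an assumption.
website_groups = {
    "My Website Group": ["website1.com", "website2.com", "website3.com"],
}
group_conditions = {
    "My Website Group": [{"action": "block", "ip": "1.1.1.1"}],
}

def effective_conditions(website):
    """Return the Conditions a Website inherits from its Website Group."""
    for group, sites in website_groups.items():
        if website in sites:
            return group_conditions.get(group, [])
    return []

print(effective_conditions("website2.com"))  # [{'action': 'block', 'ip': '1.1.1.1'}]
```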
1. Does your organization have very strict change control processes, where they have to run changes
through a lower environment?
Many organizations use a testing or staging environment on which added features, bug fixes and other changes
are tested before being deployed to the production environment. Such testing or staging environments are
called "lower" environments.
With bot protection, the test results on a lower environment do not necessarily carry over into the production
environment. First, you may have automated testing in the lower environment that you do not have in
production. Further, the production environment, with its greater number and variety of real clients, simply
cannot be mimicked by the lower environment.
Resist the temptation to create a Website Group for each environment. Each additional Website adds more complexity and overhead. Additionally, think of ABP Conditions as traffic signatures, not code deployments. Testing these Conditions in lower environments fundamentally misses the point when the intent is to mitigate false positives. The safest way to mitigate false positives is to use the ABP reports to measure what traffic the Condition tags on actual production traffic in Passive mode. The thinking, "I enabled it on my QA site and we tested it, so let's enable it on Production since no issues came back," is fundamentally wrong. These are traffic signatures that can and should be measured against production traffic.
2. How many Websites do you intend to protect? One? A few? 10-20? Or hundreds?
The number of Websites you want to add can change how to set up Website Groups. The main benefit to
grouping Websites into a single Website Group is that it allows for bulk management. The main benefit to
breaking out Websites into multiple Website Groups is the ability to get granular with protection settings for any
given Website. When you have lots of Websites, it tends to be the case that maybe one or a few of them drive the
majority of the business revenue. Then the rest fall into a long tail of less impact/importance. In that case, it
might make sense to group that long tail into a single Website Group, while the main business-driving Websites
are broken out, for more fine-tuned control. Those higher value target Websites are also the more likely to get
targeted, where the fine-tuning will be required.
If you intend to onboard many domains, are they entirely different applications (abc.com and xyz.com) or the same application segmented by geography or business sector (bank1.com, bank2.com, bank3.com, etc. or site.com, site.de, site.fr, etc.)?
When you have a large number of distinctive sites, you must understand that there are going to be limitations in
how effective and fine-tuned the protection can be. If you are looking for more of a checkbox solution, you
could group all the distinctive Websites into a single Website Group and keep things simple. But this approach
begins to unravel when individual Websites have issues with the blanketed protection setup. However, if you
have many Websites that are all the same underlying application, then this is a perfect use case for Website
Groups (Website Group name = "MySite", Websites include: mysite.com, mysite.co.uk, mysite.ie, etc.) This is
particularly useful if you have a single application that is skinned and hosted for all the respective countries in
which it operates (.co.uk, .ie, .de, .com, etc.).
If this is the case, it is best to group those Websites into a single Website Group. For example: a user types site.com, gets redirected to www.site.com, and goes to log in. They click Sign In, fill out their credentials, and this fires off an API call to signin.site.com. In this case, it makes sense to have a Website Group called "site.com" containing all the pertinent Websites: site.com, www.site.com, and signin.site.com.
Once you have your Website Group, you step into the next parts of ABP which allow you to define "what protection to
apply" and "how/where to apply it".
When you onboard a Website to ABP out of the box, it has two Per-Path Policy Assignments (for more information, see
Understanding per-Path Policies):
(?i)\.(gif|png|jpe?g|css|js|ico|svg|swf|webp|otf|woff2?|ttf|eot|txt)$
This is a path match regexp for catching static assets that most customers do not wish to protect. If you need to protect some of these assets, remove the relevant extension from the list. If you need to ignore an additional static asset type, add its extension to the list along with a 'pipe' operator (|).
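If you edit this list, a quick way to sanity-check the result is to exercise the pattern against a few paths, for example in Python:

```python
import re

# The default static-asset pattern quoted above, verbatim.
STATIC_ASSETS = re.compile(
    r"(?i)\.(gif|png|jpe?g|css|js|ico|svg|swf|webp|otf|woff2?|ttf|eot|txt)$"
)

def is_static_asset(path):
    """True if the path ends with one of the ignored static extensions."""
    return STATIC_ASSETS.search(path) is not None

print(is_static_asset("/img/logo.PNG"))   # True: (?i) makes the match case-insensitive
print(is_static_asset("/fonts/a.woff2"))  # True: woff2? matches both .woff and .woff2
print(is_static_asset("/login"))          # False: no listed extension, so not ignored
```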
The second default assignment is a catch-all that matches "/", covering all other requests, since every URL contains a "/".
If you think about all of your Website’s traffic as a pie, think of per-Path Policy Assignments as your way to carve it into
individual slices. Out of the box, there are two slices: static assets and everything else.
Let’s say you want to carve out one more slice, for your Login traffic. You can add a per-Path Policy Assignment, using
either a prefix match or regexp match, to scope /Login (for example).
Per-Path Policy Assignments provide two kinds of value. First, each one takes a Policy assignment. This means you can have your login traffic protected by a different Policy from everything else, perhaps with more rigorous checks than the broader website. Second, per-Path Policy Assignments add value on the reporting side. They help classify and categorize traffic into clean buckets, so that you can see traffic broken down by these per-Path Policy Assignments. Extending that concept, think of per-Path Policy Assignments as your use case classifier. This helps promote the narrative within your organization. If you are concerned about ATO, you can scope a per-Path Policy Assignment that maps out your login traffic, and then build reports specific to that per-Path Policy Assignment to show proof of value very quickly and intuitively.
For ABP, per-Path Policy Assignments address "how/where to apply my protection". Policies are the containers that
hold the individual protection rules and "actions to take".
By scoping per-Path Policy Assignments and then assigning them a Policy, we create the ability to say "For this subset
of traffic (a per-Path Policy Assignment), apply these protection settings (a Policy)."
Policies
Think of a Policy as a container that groups individual rules - Conditions. It is your "protection profile". A Policy comes standard with the following Directives (for more information, see Understanding the Structure of the Policies and the Default Policy): allow, block, captcha_cleared, captcha, identify, tarpit, delay, and monitor.
Directives
These Directives are hardened "instructions", or "actions to take" that the platform understands. When a request
comes in, the platform compares the request’s URL against the defined per-Path Policy Assignments in a top-down
fashion (the first match wins). When it finds the winning per-Path Policy Assignment, it looks to the Policy assigned to
that per-Path Policy Assignment. It steps into that Policy and, again, processes it top down.
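A minimal sketch of that evaluation order, assuming a simple ordered list of (match type, matcher, Policy) entries; the structure and policy names are illustrative, not ABP's configuration format:

```python
import re

# Hypothetical top-down evaluation: per-Path Policy Assignments are checked
# in order, and the first match wins.
assignments = [
    ("regexp", r"(?i)\.(gif|png|css|js)$", None),   # static assets: no Policy applied
    ("prefix", "/Login", "login-policy"),           # carved-out login slice
    ("prefix", "/", "default-policy"),              # catch-all: every URL contains "/"
]

def resolve_policy(path):
    for kind, matcher, policy in assignments:
        matched = re.search(matcher, path) if kind == "regexp" else path.startswith(matcher)
        if matched:
            return policy  # first match wins; later assignments never run
    return None

print(resolve_policy("/Login/submit"))  # login-policy
print(resolve_policy("/products"))      # default-policy
print(resolve_policy("/app.js"))        # None: the static-asset entry wins first
```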
For this reason, the out-of-the-box order of these directives is very deliberate.
Directives contain Conditions. Think of Directives as the "action to take" and the Condition as "the signal to take that
action".
Conditions
You might think of a rule or rule engine in the structure of "if this, then do that". Here, the Condition logic is the "if this" and the Directive you place the Condition under is the "then do that".
A Condition can be written using many various elements. See an example below:
A single Condition houses any number of individual Signatures. Signatures can be written using:
• HTTP Headers
• Flags: think of these as the underlying client interrogation challenges of ABP
• ABP Metadata like rate limit counters and platform fingerprints/identifiers
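A hedged sketch of that structure, with one illustrative signature from each of the three sources listed above; the request shape and field names are assumptions, not ABP's schema:

```python
# Illustrative: a Condition houses Signatures, any of which can fire.
def condition_matches(request):
    signatures = [
        lambda r: r["headers"].get("User-Agent", "").startswith("curl/"),  # HTTP header
        lambda r: r["flags"].get("web_driver", False),                     # interrogation Flag
        lambda r: r["meta"].get("rate_limit_counter", 0) > 100,            # ABP metadata
    ]
    return any(sig(request) for sig in signatures)

req = {"headers": {"User-Agent": "curl/8.0"}, "flags": {}, "meta": {}}
print(condition_matches(req))  # True: the User-Agent signature fires
```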
When you create a new Website Group on ABP, it comes by default with two per-Path Policy Assignments: static assets
(no Policy applied to these assets – i.e. allow-listed), and a catch-all "contains /" per-Path Policy Assignment which has
the Website Group's Default Policy.
The standard Default Policy comes with the standard Directives discussed above, and several Conditions pre-inserted into those Directives. These pre-inserted Conditions are Managed Conditions. Think of them as your out-of-the-box, product-endorsed rule sets, i.e. "These are the settings you should enable, and you should enable them in the Directives in which they have already been placed, because that is the recommended action to take for that respective threat."
All Conditions fall into one of two categories: Managed Conditions and Custom Conditions. The Conditions that are
present in the Default Policy are Managed Conditions. Every new Condition you create is considered a Custom/Other
Condition. If you want to block an IP, you use a Custom Condition. Managed Conditions are further distinguished by the italicized annotation "Managed Condition" at the bottom of the Condition block (see the screenshot above).
Many of the Managed Conditions are simply composites of platform Flags. Remember, Flags are the core client
interrogation checks of ABP. For example, there is a Flag that checks if the browser is running Selenium. That Flag is
part of the Automation Managed Condition that checks for various automation tools. That list of flags under the
Automation Managed Condition is:
(any
flags.automation_casper_js
flags.automation_chrome_driver
flags.automation_firefox_driver_1
flags.automation_firefox_driver_2
flags.automation_firefox_driver_6
flags.automation_ie_driver
flags.automation_phantom_js
flags.automation_selenium_ide
flags.automation_unknown_driver
flags.automation_visual_web_ripper
flags.script_dict_shell_ui_automation
flags.web_driver
)
While Custom Conditions are as simple to understand as "any Condition that is not a Managed Condition", note that most of the Custom Conditions you add will meet your own specific needs. ("I need to allow-list my office IPs." "I have to allow-list this header for our pentest." "This organization needs to be blocked from accessing the site." And so on.)
The most important thing about the Managed Conditions is their innate synergy. When combating bots, there is no
single silver bullet. Tools and advanced software can bypass many checks, and any single powerful check may also
leave backdoors open. This means that you need a layer of rule sets that target and look for specific things that
collectively come together as a single powerful defense policy.
Imagine you are running a bar. You do not want to let anyone in who is underage. You also do not want to let anyone in who is acting reckless and possibly endangering those around them. If all you did was check IDs and admit individuals on that single yes/no, you would eventually let reckless people in. If all you did was run quick cognizance checks on people and ignore IDs, you would miss the underage individuals. So there is a need for multiple checks. Bot detection follows the same idea. Layering in interrogation rule sets that evaluate the full spectrum of browser/client details allows you to catch bad actors even when they circumvent one or more of the checks.
Managed Conditions collectively look for:
Any one of these checks can eventually be circumvented on its own, but their synergy creates a lane narrow enough that bad actors must operate within it: either their operation becomes more and more costly, or they are deterred and give up or move to another target.
This is for the most part done manually, by examining your setup in Distil Bot Defender and creating a similar one in
Advanced Bot Protection.
• You have agreed upon a deployment topology for your migration with your Imperva Sales Engineer. If you have
not already done so, contact your Imperva Account Executive to schedule a call with a Sales Engineer.
• You have added your first website in CloudWAF. This applies only if you are migrating to CloudWAF. The Website should be the same website you have with Distil Bot Defender. For more information, see Onboarding a Site – Web Protection and CDN.
• You have set up your Website and created a Default Policy in Advanced Bot Protection. For more information,
see Getting Started with Imperva Advanced Bot Protection and Creating a Website Group. This sets up your
Default Policy. The Website should be the same website you have with the Distil Bot Defender, the one you
already added to CloudWAF.
• Verify that all the prerequisites are met. For more information, see Prerequisites for Migrating from Distil Bot
Defender.
• In order to maintain protection of your assets during the migration process, as you perform the migration, it is
recommended that you use both your Distil Bot Defender environment and Advanced Bot Protection in series.
How you place Advanced Bot Protection inline with your traffic will depend on your deployment model. Consult with your Imperva Sales Engineer should you have any questions.
• In Advanced Bot Protection, recreate the elements in order. For more information, see Recreating the Distil Bot
Defender Setup in Advanced Bot Protection.
• When all the elements have been migrated and are recreated in Advanced Bot Protection, remove Distil Bot
Defender from the request flow by pointing the Advanced Bot Protection integration point at your origin.
Consult with your Imperva Sales Engineer should you have any questions.
1. Domains: You set up the same Websites in Advanced Bot Protection that you have in Distil Bot Defender. You
may want to group similar Websites under the same Website Group. Grouping Websites of similar form and function together allows you to control your settings with a single Policy.
For more information, see Recreating the Distil Bot Defender Domains in Advanced Bot Protection.
2. Per-Path Policy Assignments for each domain (website): You set up Paths in Advanced Bot Protection that
mimic the paths you have in Distil Bot Defender. Where those Paths have different Policies assigned, you assign
per-Path Policies in Advanced Bot Protection.
For more information, see Recreating the Distil Bot Defender Paths and Per-Path Policies in Advanced Bot Protection.
Note: Once you have done these first steps, you are at least in a monitor-only mode in Advanced
Bot Protection. If you have set up Distil Bot Defender and Advanced Bot Protection in series, your
estate is still protected by your Distil Bot Defender policies.
3. Actions: You set up Policies with their Directive and Conditions in Advanced Bot Protection that mimic the
policies you have in Distil Bot Defender.
For more information, see Recreating the Distil Bot Defender Actions in Advanced Bot Protection.
4. Custom allow list rules: You set up allow list rules in Advanced Bot Protection. These may or may not mimic the
custom allow list rules you have in Distil Bot Defender.
For more information, see Recreating Distil Bot Defender Custom Rules in Advanced Bot Protection.
You should recreate each Distil Bot Defender domain in Advanced Bot Protection.
Each Distil Bot Defender domain is really an Advanced Bot Protection Website. However, the common-policy approach enabled by Advanced Bot Protection's Website Groups makes it simpler to manage domains that once had to be managed individually in Distil Bot Defender, even if they had the same structure and rules.
So for domains in Distil Bot Defender that have the same structure and rules, you should create a Website Group and
add a Website for each domain.
For domains in Distil Bot Defender that are unique, add a Website Group with a single Website.
For more information, see Creating a Website Group and Adding a Website.
Recreating the Distil Bot Defender Paths and Per-Path Policies in Advanced Bot Protection
For each Website Group that corresponds to a domain or group of domains in Legacy Distil Reverse Proxy, you need to
add the Paths and a per-Path Policy for each Path.
1. Discover the paths in your Distil Bot Defender deployment. You have two options:
1. If your deployment is very simple and contains a very small number of paths, then select Settings > Edit
Settings by Path in Distil Bot Defender. The paths are displayed. Make a note of them.
2. If your deployment is more complex, contact Imperva Support and ask to run a fetch script on your
Legacy Distil Reverse Proxy deployment. The fetch script provides all the information you need regarding
the paths. For more information, see Understanding the Results of the fetch Script.
2. For each discovered path in Legacy Distil Reverse Proxy, clone the Default Policy, and rename it. For more
information, see Cloning a Policy.
3. For each discovered path in Distil Bot Defender, create a corresponding path in Advanced Bot Protection and
assign a cloned Policy. For more information, see Creating a New per-Path Policy Assignment. Each path should
be defined in the Path Prefix field and the Policy for each path should be the cloned Policy you created in step 2
above.
You recreate the Distil Bot Defender actions in Advanced Bot Protection in the following ways:
• For Automated Threats Policies, Machine Learning Policies, and Rate Limiting Policies, use the results of the
fetch script to direct you as to how to configure your Policies in Advanced Bot Protection. For more information,
see Recreating Distil Bot Defender Policies.
• For Access Control Lists, create your allow lists manually in Advanced Bot Protection. For more information, see
Recreating Distil Bot Defender Access Control Lists.
The actions in Distil Bot Defender have approximate equivalents in Advanced Bot Protection. You recreate them by
following the procedure below.
1. Contact Imperva Support and ask to run a fetch script on your Distil Bot Defender deployment, or use the results
of the fetch script that you already have from setting up your paths. For more information, see Understanding
the Results of the fetch Script.
2. For each per-Path Policy that you cloned and created in Advanced Bot Protection, set up your Directives and
Conditions so that they recreate the setup that is presented in the fetch script results.
For example, say in one of your Distil Bot Defender deployment domains, there is a path for which all of the
conditions elicit the captcha action, except for Known Violators which elicits the monitor action. For this Path's
per-Path Policy in Advanced Bot Protection, you may want to place Aggregate user agents, Known violator
data centers, Automation, Identify eventually, and Rate Limiting in the Captcha Directive, and Bad user
agents in the Monitor Directive. Force Identify known violators goes in the Identify Directive, always. Use the
table in Understanding the Results of the fetch Script.
The fetch script does not provide data for Access Control Lists. You must manually examine the Access Control Lists in
your Distil Bot Defender deployment and recreate these allow lists as Conditions in the Allow Directive of the relevant
per-Path Policy. You can export your Access Control Lists to a csv file if you want.
If an allow list or a deny list consists of multiple parameters (i.e. there is a mix of IP addresses, header names, user agents, and so on), create a new Condition Group in Advanced Bot Protection that contains one Condition per parameter, each listing the same values, and add it to the Allow (for an allow list) or Block (for a deny list) Directive of the pertinent Policy. For more information, see Adding a Condition Group.
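Such a Condition Group might be sketched as follows; the field names and structure are illustrative assumptions for planning the migration, not ABP's actual schema:

```python
# Sketch: a mixed Access Control List recreated as a Condition Group,
# one Condition per parameter type, attached to the Allow Directive.
condition_group = {
    "name": "Migrated Distil allow list",
    "conditions": [
        {"parameter": "ip", "values": ["203.0.113.7", "198.51.100.0/24"]},
        {"parameter": "header", "values": ["X-Pentest-Token"]},
        {"parameter": "user_agent", "values": ["HealthCheckBot/1.0"]},
    ],
    "directive": "allow",  # use "block" when recreating a deny list
}
print([c["parameter"] for c in condition_group["conditions"]])  # ['ip', 'header', 'user_agent']
```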
Over the years of creating support tickets for Distil Bot Defender, Distil support may have created many custom allow
list and deny list rules.
Generally, it is recommended that you do not recreate these custom deny list rules in Advanced Bot Protection, but rather allow those bots to be caught and retested by the Advanced Bot Protection mechanisms before recreating the rules manually.
To recreate custom allow list rules you need to use the Advanced Bot Protection Custom Condition, and be familiar
with the Moi language. Because you do not have access to view the custom allow list rules on your account, it is
recommended that you request Imperva Support to perform this task for you.
Your Imperva support engineer can run a fetch script on your Distil Bot Defender deployment at any time.
Note: The fetch script provides results only for Automated Threats Policies, Machine Learning
Policies, and Rate Limiting Policies. The fetch script does not provide results for Access Control
Lists.
The fetch script returns data in the form of a csv file, which looks like this:
• This one account under account_id has three domains. Each of these would be a Website in Advanced Bot
Protection.
• www.myfirstsite.com and www.myecomdemo.com have similar path protection policies so you could put them
in the same Website Group.
• The path column provides the paths for the different policies of each domain. These are the paths you recreate in Advanced Bot Protection.
• The FID column indicates, for that path, whether Force Identify is enabled or disabled.
• The remaining columns show the actions that are taken for any condition on that path. The actions are:
• monitor: recreated by monitor in Advanced Bot Protection
• captcha: recreated by captcha in Advanced Bot Protection
• drop: recreated by block in Advanced Bot Protection
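For scripting a migration, the action mapping above can be captured in a small lookup table (the dictionary content is simply the mapping just listed; the function name is illustrative):

```python
# Distil Bot Defender action -> Advanced Bot Protection Directive,
# as listed above.
DISTIL_TO_ABP_DIRECTIVE = {
    "monitor": "monitor",
    "captcha": "captcha",
    "drop": "block",
}

def abp_directive(distil_action):
    """Return the ABP Directive that recreates a Distil action."""
    return DISTIL_TO_ABP_DIRECTIVE[distil_action]

print(abp_directive("drop"))  # block
```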
fetch Script Column (from Legacy Distil Reverse Proxy) Advanced Bot Protection Equivalent
Prior to the acquisition of Distil Networks by Imperva, Distil was working on a new bot mitigation platform that offered better scalability, customizable rules and policies, significantly better bot efficacy, and much easier integration into your environment. This product, now called Imperva Advanced Bot Protection (ABP), was released last May and over the course of several months has really widened the gap between itself and Distil Bot Defender.
Imperva feels very strongly that ABP provides the best bot mitigation Imperva has to offer with the greatest degree of
scalability and flexibility.
Q2. What am I expected to do and what type of support does Imperva provide as part of the migration process?
• Product documentation
• A migration guide
• A walk-through video
• A migration webinar
• Assistance with and validation of your first onboarded site
• Two additional progress checkpoints
• Support using our support system
Imperva anticipates that no more than two progress checkpoints are required.
• Migrating to one of the available integrations is a very straightforward process. Getting an Advanced Bot Protection account set up takes a few minutes, and the available integrations are designed so that they can be implemented by your teams.
• Additionally, migrating to CloudWAF requires very little effort on your end. Our team will create your account for you, and all you need to do is issue new SSL certificates for your domains (which you can do through the Imperva portal) and migrate traffic onto the appropriate CNAMEs.
Q4. Will Imperva continue to update the Distil Bot Defender up until the EOL date?
• Security patch deployments for critical security notices for which a configuration-based work around is
unavailable
• New compliance or legal requirements for which a configuration-based work around is unavailable - on a case
by case basis
• Browser changes/updates that require modification of some part of Distil Bot Defender and for which a
configuration-based work around is unavailable - on a case by case basis
Any new domain should be added to the new platform, Advanced Bot Protection (ABP).
Q6. Why should I add new domains to the new platform, Advanced Bot Protection (ABP)?
The new ABP platform provides better protection, a faster onboarding experience, and is fully supported.
Action
An Action is the title of a Directive, and describes the actual action that Advanced Bot Protection will take should one of the Conditions in that Directive be met by the incoming traffic. For example, the Directive called Block will block requests should any of its Conditions be met.
Directive
A Directive is a container of one or more Conditions. Each Directive is defined by an Action and contains the
Conditions that, when met by a monitored request, trigger that action.
Condition
A Condition is a container of rules, composed of Flags or code, against which the content of incoming requests is checked. If a match is found, that Condition has been triggered and its Directive may be acted on.
Condition Template
A Condition Template is the basis for the creation of a new Condition. The basic rule is given. That is the template. You
then fill in values for parameters in the Condition, to tailor it to your needs.
Connector
A Connector is an Integration that you can use instead of CloudWAF, if you are not using any of CloudWAF's other
services. The Connectors currently supported are:
• Cloudflare
• F5
• Lambda@Edge on AWS Cloudfront
• Nginx
Cookie Scope
When you add a Website, you can also define a Cookie Scope for that Website and related Websites.
When you add a Website, a cookie is created for that Website's Path, for example, www.example.com. However, by default it is only visible there. You can use the Cookie Scope to expand the coverage of a cookie set up for one Website. Set a path in Cookie Scope to define other paths that can use the same cookie. Note that this can only go as far as the apex domain and all its subdomains.
For example, if you want the cookie to be visible for aaa.example.com and bbb.example.com, type example.com in the Cookie Scope.
Note: If you do not set the cookie scope, the domain for the cookie will be empty. Due to
inconsistent browser handling of cookies with no domain or an empty domain, it is strongly
recommended that you do not leave the cookie scope empty.
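A sketch of how a Cookie Scope plausibly translates into the cookie's standard Domain attribute; the exact Set-Cookie header ABP emits is an assumption, but the Domain semantics (apex domain plus all subdomains) are standard cookie behavior:

```python
from typing import Optional

def set_cookie_header(name, value, cookie_scope: Optional[str]) -> str:
    """Build an illustrative Set-Cookie value from a Cookie Scope."""
    parts = ["%s=%s" % (name, value), "Path=/"]
    if cookie_scope:
        # Domain=example.com makes the cookie visible on example.com and
        # all its subdomains, e.g. aaa.example.com and bbb.example.com.
        parts.append("Domain=%s" % cookie_scope)
    # With no scope, the Domain attribute is omitted entirely, which
    # browsers handle inconsistently (hence the note above).
    return "; ".join(parts)

print(set_cookie_header("abp_session", "abc123", "example.com"))
# abp_session=abc123; Path=/; Domain=example.com
```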
Custom Scope
When assigning per-Path Policies, you can configure an assignment Path with a Custom Scope.
Requests to this Path (and other Paths with the same Custom Scope) are totaled separately from requests elsewhere.
So you can make sure that requests to a Path where high request rates are legitimate (like pages with images) will not
activate a block or captcha on requests to a Path where high request rates are suspicious (like a login page).
Flag
A Flag is a single building block of a Condition. It is a single "check" of an incoming request to see if that request
contains whatever code the Flag defines. One or more Flags constitute a Condition.
Integration
Integration is the term used for the layer normally occupied by CloudWAF in the architecture - the layer that receives
communications from the client and from Advanced Bot protection, and executes the Advanced Bot Protection
actions.
Managed Condition
A Managed Condition is a Condition supplied by Imperva. You can only edit its tags.
Path
A Path is a location or group of locations, within a Website, that is defined by a URL path or by a regular expression
that specifies characteristics of the page or pages.
per-Path Policy
You can assign a Policy to a certain Path in your Website, so that it is active for that Path but it is not active for other
Paths. This is a per-Path Policy, also known as a per-Path Policy Assignment.
Policy
A Policy is a single bot protection approach, characterized by a group of Directives. Different Policies have Directives
that are configured differently, to cater for the particular Website or Website Group they are protecting.
Rate Limit
Some Conditions are Rate Limit Conditions in that they are triggered when a certain action (access, for example) takes
place at higher than a certain rate or frequency, or a session lasts longer than a certain period.
Tag
A Tag is an identifier you give a Condition. In the graph analyses, Conditions with the same Tags can be displayed
together, enabling you to draw meaningful conclusions about the effectiveness of Conditions that share Tags.
Website
A Website is a single website protected by Advanced Bot Protection. It may consist of multiple Paths in which different
areas of the website are located.
Website Group
A Website Group is a group of websites to which the same Advanced Bot Protection Policies are applied.
Many websites have clones that perform identical functions to the "parent" website - for example, localized versions
such as acmebooks.com, acmebooks.co.fr, and acmebooks.co.nl. Advanced Bot Protection therefore allows you to
group your websites into Website Groups and then apply all your configurations to the Website Group, saving a lot of
time. You cannot apply configurations to an individual Website - only to a Website Group.
Note: From December 2021, the Advanced Bot Protection release notes are included in the Cloud
Application Security Release Notes, and are no longer published on this page. For more
information, see the Cloud Application Security Release Notes page from the Imperva Cloud
Application and Network Security guide.
Our release notes provide information on changes and enhancements in each release.
December 2021
There are two new Directives that you can use in your Policies:
• delay: If a Condition in the delay Directive is matched, the response is delayed by a few seconds. This reduces
the efficiency of attacks that rely on a fast sequence of requests.
• tarpit: If a Condition in the tarpit Directive is matched, the response is never sent. This leaves bots waiting
endlessly, but it is riskier than delay: it has a severe impact on human users and must be applied carefully.
November 2021
Role-Based Access Control (RBAC) now applies to Advanced Bot Protection
From November 30, 2021, Advanced Bot Protection is subject to RBAC in the Cloud Security Console.
An admin user must configure roles and permissions for non-admin users before those users can perform any
configuration actions in Advanced Bot Protection. This is done by checking the new Can edit ABP
configuration checkbox in the Cloud Security Console.
If this checkbox is not checked for a role that is applied to a user, or if the user is not an admin, the user is considered
to be in read-only mode.
For more information, see Manage Roles and Permissions in the Cloud Application and Edge Security
documentation.
Note that the limitations for a user in read-only mode apply to the settings windows and not to the dashboards.
October 2021
• Policy Management
• Add, delete and modify ABP Policies
• Condition Management
• Add, delete and modify managed conditions
• Create, modify and delete custom conditions and condition groups
• Traffic Insights v2 dashboard: Uses aggregates, which enables far faster loading
• Usage Report for Connectors: Shows the number of requests, which facilitates identification of overages
August 2021
The new flag cloud_service_provider identifies major cloud service providers. For more information, see
Understanding and Editing Conditions.
A new model identifies IPs responsible for unexpected variations in captcha-solving traffic.
June 2021
March 2021
There is a new Traffic Overview dashboard that uses aggregates, allowing the reports to load very quickly.
October 2021
• Documentation updates
• Improved logging
• Updated interstitial text to inform the user to unblock JS before completing the captcha
August 2021
Fastly v1.2.1
• JavaScript code was added to check the browser's language setting and replace the English text with
language-specific text in the interstitial page. It is up to the customer to supply the non-English text, but direct
translation examples for Italian and German are included for guidance.
July 2021
Lambda@Edge v1.22.0
• Configure headers to be masked from the config.js, to safeguard against modification to the interstitial causing
the help message to display immediately
• Updated dependencies to latest versions
• The connector is now shipped as two files, imperva.lua and settings.lua, to simplify the installation and
upgrade process.
• set $template_path is no longer required in OpenResty *.conf files; interstitial_template_path in settings.lua
must be used instead. See USAGE.md for more details. Note: If a custom value for $template_path is currently
in use, you must add it to interstitial_template_path in settings.lua for the Connector to continue to
work.
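As a sketch of where this option lives, a minimal settings.lua might look as follows. The surrounding table structure and the example path are assumptions for illustration; only the interstitial_template_path key comes from the note above:

```lua
-- settings.lua (illustrative sketch; the real file ships with the Connector)
local settings = {
    -- Replaces the former `set $template_path` directive in the *.conf files.
    -- Example path only; point this at your actual interstitial template.
    interstitial_template_path = "/usr/local/openresty/nginx/html/interstitial.html",
}

return settings
```

If you previously set a custom $template_path in your OpenResty configuration, carry that same value over to this key.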