What are the general basics of the web application crawler and scanner?

This article describes some general basics for web application scanning (WAS).

Starting point

The starting point should be an exact URL, including the scheme (HTTP or HTTPS), like one of the examples below. The scanner starts here and follows all links found within the scope you have selected. By doing this, the scanner scans your entire web application.

Starting point URL examples (generic placeholders):

  • http://example.com
  • https://example.com
  • https://example.com/app/

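The crawl described above can be sketched in a few lines: start from the exact URL, extract links from each page, and keep only those that resolve within the selected scope. This is a minimal illustration using only the Python standard library, not the scanner's actual implementation; the same-host scope rule is an assumption for the sketch.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse


class LinkExtractor(HTMLParser):
    """Collects href targets from anchor tags on a crawled page."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def in_scope(url, start_url):
    """Assumed scope rule for this sketch: same host as the starting point."""
    return urlparse(url).netloc == urlparse(start_url).netloc


def discover_links(page_html, page_url, start_url):
    """Return absolute, in-scope links found on a single page."""
    parser = LinkExtractor()
    parser.feed(page_html)
    absolute = (urljoin(page_url, href) for href in parser.links)
    return [u for u in absolute if in_scope(u, start_url)]
```

For example, a page containing `<a href="/about">` on `https://example.com/` yields `https://example.com/about`, while a link to a different host is dropped.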
Multi-domain support
You can take advantage of this feature to perform a thorough scan of multiple domains within the same web application. By including additional domains alongside your main target in a single scan, you can gain a comprehensive understanding of your entire web presence.

This feature allows you to assess all the domains associated with your web application in one go, ensuring that no potential vulnerabilities or risks go unnoticed.
Read more about how to add additional domains to your scan here.
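With multiple domains in scope, the crawler's scope check becomes a membership test against the set of included domains rather than a single-host comparison. A minimal sketch, assuming subdomains of an included domain are also in scope (the product's exact matching rules may differ):

```python
from urllib.parse import urlparse


def in_multi_domain_scope(url, allowed_domains):
    """True when the URL's host is one of the scan's included domains
    or a subdomain of one (assumed matching rule for this sketch)."""
    host = urlparse(url).netloc.lower()
    return any(host == d or host.endswith("." + d) for d in allowed_domains)
```

For example, with `{"example.com", "example.org"}` in scope, `https://shop.example.com/cart` is crawled, while `https://evilexample.com/` is not (the suffix check requires a dot boundary).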

Scan scope

A single web application scan is limited to 8,000 pages or 24 hours of scanning time. A scan that exceeds either limit is stopped automatically. Note that you will still get scan results for the pages that were scanned before the limit was reached.
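These limits amount to a page budget and a time budget on the crawl loop: the crawler stops when either runs out but keeps everything scanned so far. A sketch under those assumptions, where `fetch_links` is a hypothetical caller-supplied function returning the links found on a page:

```python
import time
from collections import deque

MAX_PAGES = 8000           # page budget per scan
MAX_SECONDS = 24 * 60 * 60  # time budget per scan (24 hours)


def crawl(start_url, fetch_links, max_pages=MAX_PAGES, max_seconds=MAX_SECONDS):
    """Breadth-first crawl that stops at the page or time budget,
    returning whatever was scanned up to that point."""
    deadline = time.monotonic() + max_seconds
    seen = {start_url}
    queue = deque([start_url])
    scanned = []
    while queue and len(scanned) < max_pages and time.monotonic() < deadline:
        url = queue.popleft()
        scanned.append(url)  # a partial result survives an early stop
        for link in fetch_links(url):
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return scanned
```

Even when the budget cuts the crawl short, `scanned` holds the pages processed so far, mirroring the behavior described above.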


If you have the same website under HTTP and HTTPS, you can choose either one of them.


Our crawler doesn’t follow redirects between HTTP and HTTPS (for example, from http://example.com to https://example.com). If you have such a redirect, enter the redirect target as the URL for scanning.
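In other words, a redirect is only followed when the scheme stays the same. A minimal sketch of that decision (the function name and exact rule are illustrative, not the scanner's API):

```python
from urllib.parse import urlparse


def follows_redirect(current_url, location):
    """The crawler follows a redirect only when the scheme is unchanged;
    an HTTP-to-HTTPS hop (or the reverse) is not followed."""
    return urlparse(current_url).scheme == urlparse(location).scheme
```

So if `http://example.com` redirects to `https://example.com`, the crawl stops there, which is why the HTTPS target should be entered as the starting point instead.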

Exclude URLs and forms

Please read this information for URL and form exclusion:

Excluded pages

The following file formats are ignored during scanning because their content is static.

  • DOC
  • DOCX
  • XLS
  • XLSX
  • PPT
  • PPTX
  • PDF
  • ZIP
  • WOFF
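A crawler typically applies this kind of exclusion as a simple extension check on the URL path before fetching. A minimal sketch using the formats listed above (the matching logic is an assumption, not the scanner's implementation):

```python
from urllib.parse import urlparse

# File extensions skipped because their content is static (see list above).
EXCLUDED_EXTENSIONS = {
    ".doc", ".docx", ".xls", ".xlsx", ".ppt", ".pptx", ".pdf", ".zip", ".woff",
}


def is_excluded(url):
    """True when the URL path ends in one of the ignored static-file formats."""
    path = urlparse(url).path.lower()
    return any(path.endswith(ext) for ext in EXCLUDED_EXTENSIONS)
```

Lowercasing the path makes the check case-insensitive, so `report.PDF` is skipped just like `report.pdf`, while ordinary pages such as `page.html` are still crawled.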