X-Site-Scan Results

Here you can find a detailed description of how to interpret the results returned by X-Site-Scan and what steps you need to take to rectify any problems with the external links found on your website.

Scan-Statistics:

The scan statistics give you a brief overview of the scanning process:

  • Scan started:

    The time when X-Site-Scan first accessed your server, making it easy to locate X-Site-Scan's activities in your logs. According to RFC 2616, origin servers must include a Date header field in their responses; if it is missing or in an invalid format, X-Site-Scan will use its own time and mark the displayed date red. If your server replied with a valid Date header field but its time differs by more than one minute from the time of our server, which is synchronised with an official time server, X-Site-Scan will mark the date orange and you should consider adjusting your server's clock.
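
As a sketch of this check, the following Python function (illustrative, not X-Site-Scan's actual code) classifies a Date header field the way the colours above are described, assuming a one-minute drift threshold:

```python
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

def classify_date_header(date_header, now=None):
    """Classify a server's Date header: 'red' if missing or invalid,
    'orange' if it drifts more than one minute from the reference
    clock, 'ok' otherwise. The threshold mirrors the one-minute
    rule described above."""
    now = now or datetime.now(timezone.utc)
    if not date_header:
        return "red"
    try:
        server_time = parsedate_to_datetime(date_header)
    except (TypeError, ValueError):
        return "red"
    if server_time.tzinfo is None:
        server_time = server_time.replace(tzinfo=timezone.utc)
    drift = abs((now - server_time).total_seconds())
    return "orange" if drift > 60 else "ok"
```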

  • Server software:

    This value shows how your server identified itself to X-Site-Scan. Please note that here less is better than more: giving too much information about the modules used and their respective versions might enable hackers to exploit known vulnerabilities in your software, thus helping them to compromise your server.

  • Server protocol:

    The HTTP version supported by your server. While most servers nowadays support HTTP/1.1, there are still a number of servers running HTTP/1.0.

  • Server address:

    Your server's IP-address and port number.

  • Server location:

    Your server's geographical location. While there are several options for resolving it, we decided to use the free GeoLiteCity database, available from http://www.maxmind.com/, as it offers the advantage that the required data is available from a database located on our server. Depending on whether your IP is contained in the database, it will show city + country, country only or "Unknown". For further information please refer to MaxMind's website.

  • Keep-Alive:

    Tells you whether your server supports persistent connections. If Keep-Alive is enabled, the overhead for requesting URLs is greatly reduced, making the scanning process considerably faster and more efficient; for more details please refer to RFC 2616, available from our download page. If X-Site-Scan reaches the maximum scanning time while crawling your website and Keep-Alive is disabled, we strongly recommend enabling persistent connections; please refer to your server's documentation.
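
The default behaviour differs between protocol versions: HTTP/1.1 connections are persistent unless the server sends "Connection: close", while HTTP/1.0 connections stay open only with an explicit "Connection: keep-alive". A minimal sketch of this decision, following RFC 2616 section 8.1:

```python
def connection_is_persistent(protocol, connection_header=None):
    """Decide whether a connection stays open after a response.
    HTTP/1.1 defaults to persistent unless the server sends
    'Connection: close'; HTTP/1.0 keeps the connection open only
    with an explicit 'Connection: keep-alive'."""
    tokens = {t.strip().lower() for t in (connection_header or "").split(",")}
    if protocol == "HTTP/1.1":
        return "close" not in tokens
    if protocol == "HTTP/1.0":
        return "keep-alive" in tokens
    return False
```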

  • Total URLs requested:

    The total number of URLs X-Site-Scan requested while crawling your website. This includes all requests for internal URLs (HEAD and GET requests) as well as all requests for external links.

  • Internal URLs scanned:

    The number of internal URLs which X-Site-Scan could parse and for which it therefore issued a GET request in order to download and scan the content.

  • External URLs found:

    The number of external URLs which X-Site-Scan found while crawling your webpage(s) and for which it issued HEAD requests in order to verify them.

    Note: There might be a difference between the number of total URLs requested and the sum of internal URLs scanned and external URLs found. This difference is the number of internal URLs pointing to content X-Site-Scan was unable to parse or for which the HEAD request returned an error code; you can check for possible problems using Site-Scan.
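
To illustrate what such a verification request looks like on the wire, the following sketch composes a raw HTTP/1.1 HEAD request; the User-Agent value is a placeholder, not necessarily the header the scanner actually sends:

```python
from urllib.parse import urlsplit

def build_head_request(url, user_agent="X-Site-Scan"):
    """Compose a raw HTTP/1.1 HEAD request for verifying an external
    URL without downloading its body. The user_agent default is
    illustrative only."""
    parts = urlsplit(url)
    path = parts.path or "/"
    if parts.query:
        path += "?" + parts.query
    return (
        f"HEAD {path} HTTP/1.1\r\n"
        f"Host: {parts.netloc}\r\n"
        f"User-Agent: {user_agent}\r\n"
        f"Connection: keep-alive\r\n"
        f"\r\n"
    )
```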

  • Data scanned:

    The amount of data X-Site-Scan processed while crawling your website. This value does not show the actual amount of data transmitted via the TCP/IP-connection, as it does not include HTTP-headers etc.

  • Time required:

    The time X-Site-Scan needed from issuing the first request until it received the response for the last request, either from an internal or external URL.

  • Average throughput:

    The average throughput during the scanning process in KBits/s. This value depends on several factors, such as available bandwidth, the number of scan-engines currently active, network routing etc., and therefore does not necessarily reflect the response time of your server. Furthermore, it will decrease considerably if your scan results show that most of the URLs requested were not of the content-type text/html, due to the required overhead in communication. Please also refer to the point "Keep-Alive".
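
The calculation itself is straightforward; the sketch below assumes 1 kilobit = 1000 bits, which may differ from the divisor X-Site-Scan actually uses:

```python
def average_throughput_kbits(bytes_scanned, seconds):
    """Average throughput in KBits/s: payload bytes converted to
    kilobits (assuming 1 kbit = 1000 bits), divided by the scan
    duration in seconds."""
    if seconds <= 0:
        raise ValueError("scan duration must be positive")
    return (bytes_scanned * 8 / 1000) / seconds
```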

Note: If the values for Total URLs requested or Time required are displayed in red, X-Site-Scan reached the respective limit for this value and aborted the scan; your scan results are therefore most likely incomplete. Please refer to our news webpage to check the limits currently applicable. If Keep-Alive is disabled and X-Site-Scan reaches the maximum scanning time before finishing the scan, we strongly recommend changing your server's settings to permit persistent connections.

Scan-Options selected:

The options you selected when requesting the scan:

  • Allow traversing:

    Whether you enabled the traversing option when requesting the scan.

  • Include meta tags:

    Whether your website was scanned with or without obeying the robot meta tags found on your webpages. The option you selected before starting the scan influences the way X-Site-Scan crawls your website; please refer to X-Site-Scan usage.

  • Include robots.txt:

    This section tells you whether you selected to include the rules set in your robots.txt file, whether X-Site-Scan found a robots.txt file on your server, and which user-agent and corresponding rules were used for the scan.

    If X-Site-Scan was unable to locate the robots.txt file on your server, all further information will be hidden. If you deselected "Include robots.txt" before starting X-Site-Scan, no rules will be displayed. If X-Site-Scan found the robots.txt file and you selected to include it, it will show the user-agent it used for the scan and the corresponding rules it extracted. Please note that X-Site-Scan will only display the rules it actually uses, i.e. "Disallow"; all other rules you set will be ignored and are not displayed.

    Note: When parsing your robots.txt file, X-Site-Scan will first try to find the user-agent "X-Site-Scan", matching the spelling case-insensitively. If it cannot find it, it will look for the rules defined for the wildcard user-agent "*". If these rules are not defined either, no rules will apply to the scan. For further information please refer to X-Site-Scan usage, robots.txt file.
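
Python's standard library implements the same fallback logic, so you can reproduce the lookup yourself; the file contents and paths below are made up for illustration:

```python
from urllib.robotparser import RobotFileParser

# A sample robots.txt with a specific section for X-Site-Scan and a
# wildcard section. The paths are invented for this example.
robots_lines = """
User-agent: X-Site-Scan
Disallow: /private/

User-agent: *
Disallow: /tmp/
""".strip().splitlines()

parser = RobotFileParser()
parser.parse(robots_lines)

# The specific section wins for X-Site-Scan (matched case-insensitively);
# agents without their own section fall back to the "*" rules.
specific_blocked = parser.can_fetch("X-Site-Scan", "/private/page.html")
specific_allowed = parser.can_fetch("x-site-scan", "/tmp/file.html")
fallback_blocked = parser.can_fetch("OtherBot", "/tmp/file.html")
```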

Detailed Scan-Results:

This section shows you the results for all external URLs X-Site-Scan found on your website. It contains the following information:

  • External URLs found:

    This column contains an alphabetically sorted list of all external URLs X-Site-Scan found while crawling your website. Currently the following colour codes are available for this value:

    Green: (The external URL is okay) A green colour code means that X-Site-Scan requested this URL and it returned a proper response code, e.g. 200 or 302. External URLs marked in green do not require any additional action.

    Orange: (The external URL should be double-checked) An orange colour code means that problems occurred when X-Site-Scan requested this URL. These problems might only be temporary, but we recommend that you double-check these URLs, e.g. by requesting them with your browser; first, however, please read the note below. Possible problems currently detected include:

    • Invalid URL: The URL for the external resource is malformed, and therefore no request was issued.

    • Host/Proxy Unresolved: X-Site-Scan was unable to resolve the IP-address for this URL. This might happen because the name-server used by X-Site-Scan was temporarily unavailable, the domain-name is new and not yet fully propagated to all name-servers, there was a temporary connection problem, or the URL is spelled wrongly. Please double-check the spelling of the URL and whether the domain still exists and is fully propagated.

    • Host Unreachable: The domain-name was successfully resolved into an IP-address, but X-Site-Scan was unable to connect to the server within the given timeframe. Possible reasons include temporary routing problems, or that the server in question is currently not running, too busy to reply, or faced an internal error.

    • Request Timeout: X-Site-Scan resolved the domain-name and was able to connect to the server, but the server didn't reply within the given timeframe. Possible reasons are the same as for "Host Unreachable".

    • Invalid Response: The response header returned for the request was invalid, i.e. it was empty.

    Red: (The external URL should be updated) A red colour code means that the request for the external URL returned either response code 404 (Not Found) or 301 (Moved Permanently). In the first case you should try to find out whether the resource in question is available under a new URL or remove your external link; in the second case you can use our HTTP response header tool to check for the new URL and update your links accordingly.

    Note: If your results indicate a problem for certain URLs, X-Site-Scan makes it very easy to locate these URLs in your website. Simply click on the external URL in question and X-Site-Scan will list all your webpages containing links to this URL, making it easy for you to update them.
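
The colour scheme above can be summarised in a few lines. Note that the mapping below only covers the cases the documentation names explicitly; how X-Site-Scan treats other status codes (e.g. 500) is not stated, and this sketch assumes they count as a "proper response":

```python
def colour_code(status):
    """Map a result to the colour codes described above. `status` is an
    int HTTP status code, or an error string such as 'Request Timeout'
    when no valid response was received."""
    if isinstance(status, str):
        return "orange"   # connection-level problem: double-check the URL
    if status in (404, 301):
        return "red"      # update or remove the link
    return "green"        # e.g. 200 or 302: no action required
```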

  • Status code:

    The HTTP response code returned when requesting this URL; for a detailed explanation of the HTTP response codes please refer to RFC 2616, available from our download page. In case a problem occurred while issuing the request, the problem is stated in clear text; please refer to the section above.

  • Protocol:

    The protocol used by the server responding to the request, currently HTTP/1.0 or HTTP/1.1; additional protocols might be added in the future.

  • Content:

    This column shows the content-type of the resource the external URL is pointing to, usually something like "text/html" or "text/javascript".

  • Last modified:

    If the response header issued for the request contains a "Last-Modified" header field, X-Site-Scan will display it here. You might want to check what you are linking to in case the content was updated or changed recently.

Note: In order to minimise the required bandwidth, X-Site-Scan tries to verify the external URLs by issuing a HEAD request. Unfortunately quite a number of servers are configured to return invalid or incomplete HTTP response headers when receiving a HEAD request, while they return proper response headers when replying to a GET request. Please bear this in mind when verifying your X-Site-Scan results with our HTTP response header tool or your web-browser.

Download Results

For your future reference you can also download a plain text file containing the results of the scan. It contains all external URLs, grouped according to the result for each request, together with the actual response code or error message. This file is also available in a compressed version. Please note that this file is in the UNIX text format; should you have problems reading it with your text processor, you can transform it into the Windows text format using Treptos, a software tool available from our download page.
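
The conversion itself is simply a matter of line endings: UNIX text files end lines with LF ("\n"), Windows with CRLF ("\r\n"). If you prefer, you can perform the equivalent transformation yourself, for example:

```python
def unix_to_windows(text):
    """Convert UNIX line endings (LF) to Windows line endings (CRLF).
    Normalising to LF first avoids doubling carriage returns in text
    that has already been converted."""
    return text.replace("\r\n", "\n").replace("\n", "\r\n")
```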

Server powered by
www.ipserverone.com
Scan-technology by M-Press Systems