Spider Node.js Reference Documentation

Spider

Current Version: 9.5.0.97

Chilkat Spider web crawler object.

Object Creation

var obj = new chilkat.Spider();
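
For example, a minimal sketch of creating and seeding the object (the domain and URL are illustrative, and loading of the chilkat module is assumed):

// Assumes the chilkat module has already been loaded; the package name
// varies by platform and Node.js version.
var spider = new chilkat.Spider();

// The spider is restricted to a single domain, set via Initialize.
spider.Initialize("www.example.com");

// Seed the queue of URLs to be crawled with at least one starting URL.
spider.AddUnspidered("https://www.example.com/");

// See CrawlNext below for the crawl loop itself.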

Properties

AbortCurrent
AbortCurrent
· boolean
Introduced in version 9.5.0.58

When set to true, causes the currently running method to abort. Methods that always finish quickly (i.e. have no lengthy file operations or network communications) are not affected. If no method is running when this property is set, it is automatically reset to false when the next method is called. When the abort occurs, this property is reset to false. Both synchronous and asynchronous method calls can be aborted. (A synchronous method call could be aborted by setting this property from a separate thread.)

top
AvoidHttps
AvoidHttps
· boolean

If set to true, the spider will avoid all HTTPS URLs. The default is false.

top
CacheDir
CacheDir
· string

Specifies a cache directory to use for spidering. If either of the FetchFromCache or UpdateCache properties are true, this is the location of the cache to be used. Note: the Internet Explorer, Netscape, and FireFox caches are completely separate from the Chilkat Spider cache directory. You should specify a new and empty directory.
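
For example, a minimal sketch of a cache configuration (the directory path is illustrative):

var spider = new chilkat.Spider();

// Use a new, empty directory dedicated to the Chilkat Spider cache.
spider.CacheDir = "C:/spiderCache/";

// Read pages from the cache when available, and save newly
// downloaded pages back to the cache.
spider.FetchFromCache = true;
spider.UpdateCache = true;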

More Information and Examples
top
ChopAtQuery
ChopAtQuery
· boolean

If true, the query portion of each URL is automatically removed when adding to the unspidered list. The default value is false.

top
ConnectTimeout
ConnectTimeout
· integer

The maximum number of seconds to wait while connecting to an HTTP server.

top
DebugLogFilePath
DebugLogFilePath
· string

If set to a file path, causes each Chilkat method or property call to automatically append its LastErrorText to the specified log file. The information is appended such that if a hang or crash occurs, it is possible to see the context in which the problem occurred, as well as a history of all Chilkat calls up to the point of the problem. The VerboseLogging property can be set to provide more detailed information. (A brief sketch of enabling this logging appears after the list below.)

This property is typically used for debugging the rare cases where a Chilkat method call hangs or generates an exception that halts program execution (i.e. crashes). A hang or crash should generally never happen. The typical causes of a hang are:

  1. a timeout related property was set to 0 to explicitly indicate that an infinite timeout is desired,
  2. the hang is actually a hang within an event callback (i.e. it is a hang within the application code), or
  3. there is an internal problem (bug) in the Chilkat code that causes the hang.
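
A brief sketch of enabling this logging (the log file path is illustrative):

var spider = new chilkat.Spider();

// Append each call's LastErrorText to a debug log file.
spider.DebugLogFilePath = "C:/logs/spiderDebug.txt";

// Optionally increase the amount of detail that is logged.
spider.VerboseLogging = true;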

top
Domain
Domain
· string, read-only

The domain name that is being spidered. This is the domain previously set in the Initialize method.

top
FetchFromCache
FetchFromCache
· boolean

If true, pages are fetched from the cache when possible. If false, the cache is ignored. The default value is true. Regardless, if no CacheDir is set, the cache is not used.

More Information and Examples
top
FinalRedirectUrl
FinalRedirectUrl
· string, read-only
Introduced in version 9.5.0.85

If the last URL crawled was redirected (as indicated by the WasRedirected property), this property will contain the final redirect URL.

top
LastErrorHtml
LastErrorHtml
· string, read-only

Provides information in HTML format about the last method/property called. If a method call returns a value indicating failure, or behaves unexpectedly, examine this property to get more information.

top
LastErrorText
LastErrorText
· string, read-only

Provides information in plain-text format about the last method/property called. If a method call returns a value indicating failure, or behaves unexpectedly, examine this property to get more information.

top
LastErrorXml
LastErrorXml
· string, read-only

Provides information in XML format about the last method/property called. If a method call returns a value indicating failure, or behaves unexpectedly, examine this property to get more information.

top
LastFromCache
LastFromCache
· boolean, read-only

true if the last page spidered was fetched from the cache; otherwise false.

top
LastHtml
LastHtml
· string, read-only

The HTML text of the last page fetched by the spider.

top
LastHtmlDescription
LastHtmlDescription
· string, read-only

The HTML META description from the last page fetched by the spider.

More Information and Examples
top
LastHtmlKeywords
LastHtmlKeywords
· string, read-only

The HTML META keywords from the last page fetched by the spider.

More Information and Examples
top
LastHtmlTitle
LastHtmlTitle
· string, read-only

The HTML title from the last page fetched by the spider.

More Information and Examples
top
LastMethodSuccess
LastMethodSuccess
· boolean

Indicates whether the last method call succeeded or failed. A value of true indicates success, a value of false indicates failure. This property is automatically set for method calls (see the example after the list below). It is not modified by property accesses. The property is automatically set to indicate success for the following types of method calls:

  • Any method that returns a string.
  • Any method returning a Chilkat object, binary bytes, or a date/time.
  • Any method returning a standard boolean status value where success = true and failure = false.
  • Any method returning an integer where failure is defined by a return value less than zero.

Note: Methods that do not fit the above requirements will always set this property equal to true. For example, a method that returns no value (such as a "void" in C++) will technically always succeed.
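
For example, a sketch of checking LastMethodSuccess after a string-returning method:

// GetSpideredUrl returns a string, so failure is indicated by a null
// return value and by LastMethodSuccess being false.
var url = spider.GetSpideredUrl(0);
if (spider.LastMethodSuccess !== true) {
    console.log(spider.LastErrorText);
} else {
    console.log(url);
}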

top
LastModDateStr
LastModDateStr
· string, read-only

The last modification date/time from the last page fetched by the spider.

top
LastUrl
LastUrl
· string, read-only

The URL of the last page spidered.

top
MaxResponseSize
MaxResponseSize
· integer

The maximum HTTP response size allowed. The spider will automatically fail any pages larger than this size. The default value is 250,000 bytes.

More Information and Examples
top
MaxUrlLen
MaxUrlLen
· integer

The maximum URL length allowed. URLs longer than this are not added to the unspidered list. The default value is 200.
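
For example, a sketch of adjusting both size limits (the values are illustrative):

var spider = new chilkat.Spider();

// Fail any page larger than 1 MB instead of the default 250,000 bytes.
spider.MaxResponseSize = 1000000;

// Ignore URLs longer than 300 characters instead of the default 200.
spider.MaxUrlLen = 300;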

More Information and Examples
top
NumAvoidPatterns
NumAvoidPatterns
· integer, read-only

The number of avoid patterns previously set by calling AddAvoidPattern.

top
NumFailed
NumFailed
· integer, read-only

The number of URLs in the object's failed URL list.

top
NumOutboundLinks
NumOutboundLinks
· integer, read-only

The number of URLs in the object's outbound links URL list.

top
NumSpidered
NumSpidered
· integer, read-only

The number of URLs in the object's already-spidered URL list.

top
NumUnspidered
NumUnspidered
· integer, read-only

The number of URLs in the object's unspidered URL list.

More Information and Examples
top
PreferIpv6
PreferIpv6
· boolean

If true, then use IPv6 over IPv4 when both are supported for a particular domain. The default value of this property is false, which will choose IPv4 over IPv6.

top
ProxyDomain
ProxyDomain
· string

The domain name of a proxy host if an HTTP proxy is used.

top
ProxyLogin
ProxyLogin
· string

If an HTTP proxy is used and it requires authentication, this property specifies the HTTP proxy login.

top
ProxyPassword
ProxyPassword
· string

If an HTTP proxy is used and it requires authentication, this property specifies the HTTP proxy password.

top
ProxyPort
ProxyPort
· integer

The port number of a proxy server if an HTTP proxy is used.
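
For example, a sketch of routing the spider through an HTTP proxy (all values are illustrative):

var spider = new chilkat.Spider();

spider.ProxyDomain = "proxy.example.com";
spider.ProxyPort = 8080;

// Only needed if the proxy requires authentication.
spider.ProxyLogin = "proxyUser";
spider.ProxyPassword = "proxyPassword";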

top
ReadTimeout
ReadTimeout
· integer

The maximum number of seconds to wait when reading from an HTTP server.

top
UpdateCache
UpdateCache
· boolean

If true, pages are saved to the cache. If false, the cache is ignored. The default value is true. Regardless, if no CacheDir is set, the cache is not used.

More Information and Examples
top
UserAgent
UserAgent
· string

The value of the HTTP user-agent header field to be sent with HTTP requests. This can be set to any desired value, but be aware that some websites may reject requests from unknown user agents.
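
For example, a sketch that sets an identifying user-agent along with the connect and read timeouts (the values are illustrative):

var spider = new chilkat.Spider();

// Identify the crawler to web servers.
spider.UserAgent = "MyCrawler/1.0 (+https://www.example.com/bot)";

// Give up on slow connections and slow reads after 20 seconds.
spider.ConnectTimeout = 20;
spider.ReadTimeout = 20;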

top
VerboseLogging
VerboseLogging
· boolean

If set to true, then the contents of LastErrorText (or LastErrorXml, or LastErrorHtml) may contain more verbose information. The default value is false. Verbose logging should only be used for debugging. The potentially large quantity of logged information may adversely affect performance.

top
Version
Version
· string, read-only

Version of the component/library, such as "9.5.0.94".

More Information and Examples
top
WasRedirected
WasRedirected
· boolean, read-only
Introduced in version 9.5.0.85

Indicates whether the last URL crawled was redirected. (true = yes, false = no)

top
WindDownCount
WindDownCount
· integer

The "wind-down" phase begins when this number of URLs has been spidered. When in the wind-down phase, no new URLs are added to the unspidered list. The default value is 0 which means that there is NO wind-down phase.

top

Methods

AddAvoidOutboundLinkPattern
AddAvoidOutboundLinkPattern(pattern);
· Does not return anything (returns Undefined).
· pattern String

Adds a wildcarded pattern to prevent collecting matching outbound link URLs. For example, if "*google*" is added, then any outbound links containing the word "google" will be ignored. The "*" character matches zero or more of any character.

More Information and Examples
top
AddAvoidPattern
AddAvoidPattern(pattern);
· Does not return anything (returns Undefined).
· pattern String

Adds a wildcarded pattern to prevent spidering matching URLs. For example, if "*register*" is added, then any url containing the word "register" is not spidered. The "*" character matches zero or more of any character.

More Information and Examples
top
AddMustMatchPattern
AddMustMatchPattern(pattern);
· Does not return anything (returns Undefined).
· pattern String

Adds a wildcarded pattern to limit spidering to only URLs that match the pattern. For example, if "*/products/*" is added, then only URLs containing "/products/" are spidered. This is helpful for only spidering a portion of a website. The "*" character matches zero or more of any character.
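
For example, a sketch combining the pattern methods to restrict a crawl (the domain and patterns are illustrative). Note that Initialize clears any previously added patterns, so it is called first:

var spider = new chilkat.Spider();
spider.Initialize("www.example.com");

// Only spider URLs containing "/products/".
spider.AddMustMatchPattern("*/products/*");

// Skip registration and login pages.
spider.AddAvoidPattern("*register*");
spider.AddAvoidPattern("*login*");

// Do not collect outbound links pointing at google.
spider.AddAvoidOutboundLinkPattern("*google*");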

More Information and Examples
top
AddUnspidered
AddUnspidered(url);
· Does not return anything (returns Undefined).
· url String

To begin spidering you must call this method one or more times to provide starting points. It adds a single URL to the object's internal queue of URLs to be spidered.

More Information and Examples
top
CanonicalizeUrl
var retStr = CanonicalizeUrl(url);
· Returns a String.
· url String

Canonicalizes a URL by doing the following:

  • Drops username/password if present.
  • Drops fragment if present.
  • Converts domain to lowercase.
  • Removes port 80 or 443.
  • Removes default.asp, index.html, index.htm, default.html, default.htm, index.php, index.asp, default.php, .cfm, .aspx, .php3, .pl, .cgi, .txt, .shtml, .phtml.
  • Removes "www." from the domain if present.

Returns null on failure
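
For example, a sketch of canonicalizing a URL (the input URL is illustrative, and the exact output is not guaranteed here):

var spider = new chilkat.Spider();

var canon = spider.CanonicalizeUrl("https://user:pw@www.example.com:443/index.html#section2");
if (spider.LastMethodSuccess === true) {
    // Expected to drop the username/password, fragment, default port,
    // default document, and "www." prefix.
    console.log(canon);
}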

More Information and Examples
top
ClearFailedUrls
ClearFailedUrls();
· Does not return anything (returns Undefined).

Clears the object's internal list of URLs that could not be downloaded.

top
ClearOutboundLinks
ClearOutboundLinks();
· Does not return anything (returns Undefined).

Clears the object's internal list of outbound URLs that will automatically accumulate while spidering.

top
ClearSpideredUrls
ClearSpideredUrls();
· Does not return anything (returns Undefined).

Clears the object's internal list of already-spidered URLs that will automatically accumulate while spidering.

top
CrawlNext
var retBool = CrawlNext();
· Returns a Boolean.

Crawls the next URL in the internal list of unspidered URLs. The URL is moved from the unspidered list to the spidered list. Any new links within the same domain and not yet spidered are added to the unspidered list (provided they do not match "avoid" patterns, etc.). Any new outbound links are added to the outbound URL list. If successful, the HTML of the downloaded page is available in the LastHtml property. If there are no more URLs left unspidered, the method returns false. Information about the URL crawled is available in the properties LastUrl, LastFromCache, and LastModDateStr.
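
A minimal crawl loop sketch (the domain and starting URL are illustrative):

var spider = new chilkat.Spider();
spider.Initialize("www.example.com");
spider.AddUnspidered("https://www.example.com/");

// CrawlNext returns false once the unspidered list is empty.
while (spider.CrawlNext() === true) {
    console.log(spider.LastUrl);
    console.log(spider.LastHtmlTitle);

    // Pause between fetches to be polite to the server.
    spider.SleepMs(1000);
}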

More Information and Examples
top
CrawlNextAsync (1)
var ret_task = CrawlNextAsync();
· Returns a Task

Creates an asynchronous task to call the CrawlNext method with the arguments provided. (Async methods are available starting in Chilkat v9.5.0.52.)

Returns null on failure
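
A sketch of crawling one URL asynchronously. The Task object's Run callback and GetResultBool method are assumed from the general Chilkat Task API; they are not documented in this section:

var task = spider.CrawlNextAsync();
if (task === null) {
    console.log(spider.LastErrorText);
} else {
    // Run is assumed to start the task and invoke the callback when it completes.
    task.Run(function () {
        // Copy the results of the async call back into the spider object
        // so that properties such as LastUrl and LastHtml are populated.
        spider.LoadTaskCaller(task);

        // GetResultBool is assumed to return CrawlNext's boolean result.
        if (task.GetResultBool() === true) {
            console.log(spider.LastUrl);
        }
    });
}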

top
FetchRobotsText
var retStr = FetchRobotsText();
· Returns a String.

Returns the contents of the robots.txt file from the domain being crawled. This spider object will not crawl URLs excluded by robots.txt. If you believe the spider is not behaving correctly, please notify us at support@chilkatsoft.com and provide information detailing a case that allows us to reproduce the problem.

Returns null on failure

More Information and Examples
top
FetchRobotsTextAsync (1)
var ret_task = FetchRobotsTextAsync();
· Returns a Task

Creates an asynchronous task to call the FetchRobotsText method with the arguments provided. (Async methods are available starting in Chilkat v9.5.0.52.)

Returns null on failure

top
GetAvoidPattern
var retStr = GetAvoidPattern(index);
· Returns a String.
· index Number

Returns the Nth avoid pattern previously added by calling AddAvoidPattern. Indexing begins at 0.

Returns null on failure

top
GetBaseDomain
var retStr = GetBaseDomain(domain);
· Returns a String.
· domain String

Returns the second-level + top-level domain of the domain. For example, if domain is "xyz.example.com", this returns "example.com". For some domains, such as "xyz.example.co.uk", the top 3 levels are returned, such as "example.co.uk".

Returns null on failure

More Information and Examples
top
GetFailedUrl
var retStr = GetFailedUrl(index);
· Returns a String.
· index Number

Returns the Nth URL in the failed URL list. Indexing begins at 0.

Returns null on failure

top
GetOutboundLink
var retStr = GetOutboundLink(index);
· Returns a String.
· index Number

Returns the Nth URL in the outbound link URL list. Indexing begins at 0.

Returns null on failure

top
GetSpideredUrl
var retStr = GetSpideredUrl(index);
· Returns a String.
· index Number

Returns the Nth URL in the already-spidered URL list. Indexing begins at 0.

Returns null on failure
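
For example, a sketch that dumps the accumulated URL lists (the spider object is assumed to have already crawled some pages):

var i;
for (i = 0; i < spider.NumSpidered; i++) {
    console.log("spidered: " + spider.GetSpideredUrl(i));
}
for (i = 0; i < spider.NumFailed; i++) {
    console.log("failed: " + spider.GetFailedUrl(i));
}
for (i = 0; i < spider.NumOutboundLinks; i++) {
    console.log("outbound: " + spider.GetOutboundLink(i));
}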

top
GetUnspideredUrl
var retStr = GetUnspideredUrl(index);
· Returns a String.
· index Number

Returns the Nth URL in the unspidered URL list. Indexing begins at 0.

Returns null on failure

top
GetUrlDomain
var retStr = GetUrlDomain(url);
· Returns a String.
· url String

Returns the domain name part of a URL. For example, if the URL is "https://www.chilkatsoft.com/test.asp", then "www.chilkatsoft.com" is returned.

Returns null on failure
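
For example, a sketch using GetUrlDomain together with GetBaseDomain:

var spider = new chilkat.Spider();

var domain = spider.GetUrlDomain("https://www.chilkatsoft.com/test.asp");
console.log(domain);   // "www.chilkatsoft.com"

var base = spider.GetBaseDomain(domain);
console.log(base);     // expected: "chilkatsoft.com"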

top
Initialize
Initialize(domain);
· Does not return anything (returns Undefined).
· domain String

Initializes the object to begin spidering a domain. Calling Initialize clears any patterns added via the AddAvoidOutboundLinkPattern, AddAvoidPattern, and AddMustMatchPattern methods. The domain name passed to this method is what is returned by the Domain property. The spider only crawls URLs within the same domain.

More Information and Examples
top
LoadTaskCaller
var status = LoadTaskCaller(task);
· Returns Boolean (true for success, false for failure).
· task Task
Introduced in version 9.5.0.80

Loads the caller of the task's async method.

Returns true for success, false for failure.

top
RecrawlLast
var retBool = RecrawlLast();
· Returns a Boolean.

Re-crawls the last URL spidered. This is helpful when cookies set in a previous page load cause the page to be loaded differently the next time.

top
RecrawlLastAsync (1)
var ret_task = RecrawlLastAsync();
· Returns a Task

Creates an asynchronous task to call the RecrawlLast method with the arguments provided. (Async methods are available starting in Chilkat v9.5.0.52.)

Returns null on failure

top
SkipUnspidered
SkipUnspidered(index);
· Does not return anything (returns Undefined).
· index Number

Moves a URL from the unspidered list to the spidered list. This allows an application to skip a specific URL.
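
For example, a sketch that skips any queued URLs containing "/archive/" (the substring is illustrative):

var i = 0;
while (i < spider.NumUnspidered) {
    var url = spider.GetUnspideredUrl(i);
    if (url.indexOf("/archive/") >= 0) {
        // SkipUnspidered moves the URL out of the unspidered list,
        // so the index is not advanced after skipping.
        spider.SkipUnspidered(i);
    } else {
        i++;
    }
}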

top
SleepMs
SleepMs(numMilliseconds);
· Does not return anything (returns Undefined).
· numMilliseconds Number

Suspends the execution of the current thread until the time-out interval elapses.

top