Version 77 (Vassilis Papavassiliou, 2014-08-12 11:38 AM) → Version 78/167 (Vassilis Papavassiliou, 2014-08-12 11:46 AM)
h1. Getting Started
Once you [[DeveloperSetup|build]] or [[HowToGet|download]] an ilsp-fc runnable jar, you can run it like this:
<pre><code>java -jar ilsp-fc-X.Y.Z-jar-with-dependencies.jar</code></pre>
In the case of monolingual crawls, the required input from the user is:
* a list of seed URLs pointing to relevant web pages. An example seed URL list for _Environment_ in English can be found at [[ENV_EN_seeds.txt]].
In the case of focused crawls (i.e. when the crawler aims to visit/process/store only web pages related to a targeted domain), the input should also include:
* a list of term triplets (_<relevance,term,subtopic>_) that describe a domain and, optionally, subcategories of this domain (this list is required if the user aims to acquire domain-specific documents). An example domain definition for the _Environment_ domain in English can be found at [[ENV_EN_topic.txt]].
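For illustration only, a domain definition might contain triplet lines such as the invented ones below. The delimiter and field values here are assumptions made for this sketch; the linked example files (e.g. [[ENV_EN_topic.txt]]) are the authoritative reference for the actual format.

<pre><code>10;water pollution;pollution
8;renewable energy;energy
5;biodiversity;
</code></pre>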
In case of bilingual crawling, the input from the user includes:
* a seed URL list which should contain URL(s) from only one web site (e.g. [[ENV_EN_ES_seed.txt]]). The crawler will follow only links pointing to pages inside this web site. However, the user can use the <code>filter</code> parameter (see below) to allow visiting links pointing to pages either in other language versions of the top domain of the URL (e.g. http://www.fifa.com/, http://es.fifa.com/, etc.) or in different web sites (i.e. in cases where the translations are on two web sites, e.g. http://www.nrcan.gc.ca and http://www.rncan.gc.ca).
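The effect of such a regex filter can be sketched as follows. This is an illustrative Python snippet, not the crawler's own code; it only shows how a pattern like <code>.*fifa.com.*</code> admits links in all language versions of one top domain while rejecting everything else.

<pre><code>import re

# Illustrative sketch only: a -filter-style regex matched against candidate links.
url_filter = re.compile(r".*fifa\.com.*")

for url in ("http://www.fifa.com/", "http://es.fifa.com/",
            "http://www.nrcan.gc.ca/"):
    print(url, "accepted" if url_filter.match(url) else "rejected")
</code></pre>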
In the case of focused crawls, the input should also include:
* a list of term triplets (_<relevance,term,subtopic>_) that describe a domain and, optionally, subcategories of this domain, in both targeted languages (i.e. the union of the domain definitions in each language; this list is required if the user aims to acquire domain-specific documents). An example domain definition for the English-Spanish pair can be found at [[ENV_EN_ES_topic.txt]].
For both monolingual and bilingual crawling, the set of currently supported languages comprises de, el, en, es, fr, hr, it, ja, and pt.
Several settings influence the crawling process; they can be defined in a configuration file before crawling starts. The default configuration file, [[crawler_config.xml]], is included in the ilsp-fc runnable jar. Two typical customized examples are [[FMC_config.xml]] for monolingual crawls and [[FBC_config.xml]] for bilingual crawls.
Some of the settings can also be overridden using command-line options of the ilsp-fc runnable jar, as follows:
<pre><code>-a : user agent name (required)
-type : the type of crawling: monolingual (m) or parallel (p).
-cfg : the configuration file that will be used instead of the default (see crawler_config.xml above).
-c : the crawl duration in minutes. Since the crawler runs in cycles (during which links stored at the top of
the crawler’s frontier are extracted and new links are examined) it is very likely that the defined time
will expire during a cycle run. Then, the crawler will stop only after the end of the running cycle.
The default value is 10 minutes.
-n : the crawl duration in cycles. The default is 1. This parameter is recommended for testing purposes.
-t : the number of threads that will be used to fetch web pages in parallel.
-f : Forces the crawler to start a new job (required).
-lang : the targeted language in case of monolingual crawling (required).
-l1 : the first targeted language in case of bilingual crawling (required).
-l2 : the second targeted language in case of bilingual crawling (required).
-u : the text file that contains the seed URLs that will initialize the crawler. In case of bilingual crawling
the list should contain only one or two URLs from the same web domain.
-tc : domain definition (a text file that contains a list of term triplets that describe the targeted
domain). If omitted, the crawl will be a "general" one (i.e. module for text-to-domain
classification will not be used).
-k : Forces the crawler to annotate boilerplate content in parsed text.
-filter : A regular expression; URLs which do NOT match it are filtered out.
The use of this filter forces the crawler to focus either on a specific
web domain (e.g. ".*ec.europa.eu.*") or on a part of a web domain
(e.g. ".*/legislation_summaries/environment.*"). Note that if this filter
is used, only the seed URLs that match this regex will be fetched.
-u_r : This parameter should be used for bilingual crawling when there is an already known pattern in URLs
which implies that one page is the candidate translation of the other. It consists of the two strings
to be replaced, separated by ';'.
-d : Forces the crawler to stay in a web site (i.e. starts from a web site and extracts only links to pages
inside the same web site). It should be used only for monolingual crawling.
-mtlen: Minimum number of tokens in a cleaned document. If the length (in tokens) of the cleaned
text is less than this value, the document will not be stored.
-xslt : Insert a stylesheet for rendering XML results as HTML.
-oxslt: Export crawl results with the help of an XSLT file for easier examination of results.
-dom: Title of the targeted domain (required when a domain definition, i.e. the -tc parameter, is used).
-dest: The directory where the results (i.e. the crawled data) will be stored.
-of: A text file containing a list with the exported XML files (see section Output below).
-ofh: An HTML file containing a list with the generated XML files (see section Output below).
</code></pre>
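The idea behind <code>-u_r</code> can be sketched in a few lines. This is an illustrative snippet, not the crawler's actual implementation: the parameter value holds the two strings to be swapped, separated by ';', and swapping them in a page's URL yields the URL of its candidate translation. The URLs below are hypothetical.

<pre><code># Illustrative sketch of the -u_r replacement pattern (not ilsp-fc's own code).
def candidate_translation(url, u_r):
    # u_r is e.g. "/en/;/es/": replace the first string with the second.
    source, target = u_r.split(";")
    return url.replace(source, target)

print(candidate_translation("http://www.example.com/en/news.html", "/en/;/es/"))
</code></pre>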
h2. Run a monolingual crawl
<pre><code>java -jar ilsp-fc-X.Y.Z-jar-with-dependencies.jar crawlandexport -a vpapa@ilsp.gr \
-cfg FMC_config.xml -t 10 -type m -c 10 -lang de -of output_test1_list.txt \
-ofh output_test1_list.txt.html -tc Automotive-seed-terms-de.txt \
-u Automotive-seed-urls.txt -f -k -dom Automotive</code></pre>
<pre><code>java -jar ilsp-fc-X.Y.Z-jar-with-dependencies.jar crawlandexport -a test2 \
-t 10 -f -k -type m -c 5 -lang es -of output_test2_list.txt \
-ofh output_test2_list.txt.html -u Automotive-seed-urls.txt</code></pre>
h2. Run a bilingual crawl
<pre><code>java -jar ilsp-fc-X.Y.Z-jar-with-dependencies.jar crawlandexport -a test3 -c 10 -f -k -l1 de -l2 it \
-t 10 -of test_HS_DE-IT_output.txt -ofh test_HS_DE-IT_output.txt.html -tc HS_DE-IT_topic.txt \
-type p -u seed_suva.txt -cfg FBC_config.xml -dom HS</code></pre>
<pre><code>java -jar ilsp-fc-X.Y.Z-jar-with-dependencies.jar crawlandexport -a test4 -c 11 -f -k -l1 es -l2 pt \
-t 10 -of test_F_ES-PT_output.txt -ofh test_F_ES-PT_output.txt.html \
-type p -u seed_uefa.txt -filter ".*uefa.com.*" </code></pre>
h2. Output
The output of the ilsp-fc in the case of a monolingual crawl is a list of links pointing to XML files following the cesDOC Corpus Encoding Standard (http://www.xces.org/). See [[cesDOC_file]] for an example in French for the Environment domain.
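The list file passed via <code>-of</code> can then be consumed by downstream scripts. The sketch below assumes the file holds one link per line (an assumption of this example, not a documented guarantee), and the filename is hypothetical.

<pre><code># Illustrative sketch: read the list file produced via the -of option,
# assuming one exported-XML link per line and skipping blank lines.
def read_export_list(path):
    with open(path, encoding="utf-8") as fh:
        return [line.strip() for line in fh if line.strip()]

# e.g. read_export_list("output_test1_list.txt")
</code></pre>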
The output of the ilsp-fc in the case of a bilingual crawl is a list of links to XML files following the cesAlign Corpus Encoding Standard for linking cesDOC documents. This example [[cesAlign_file]] serves as a link between a pair of cesDOC documents in English and Greek.
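As an illustration of the general shape of such a file, the fragment below follows common XCES cesAlign practice: a link group names the two cesDOC documents and lists the aligned units. This is a hypothetical sketch with invented filenames; the files actually generated by ilsp-fc may differ in detail, so consult [[cesAlign_file]] for the authoritative form.

<pre><code><cesAlign version="1.0">
  <linkGrp targType="s" fromDoc="doc_en.xml" toDoc="doc_el.xml">
    <link xtargets="1;1"/>
    <link xtargets="2;2 3"/>
  </linkGrp>
</cesAlign>
</code></pre>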