Getting Started » History » Version 144

Vassilis Papavassiliou, 2016-02-16 07:45 PM


Getting Started

Once you have built or downloaded an ilsp-fc runnable jar, you can run it like this:

java -jar ilsp-fc-X.Y.Z-jar-with-dependencies.jar

Examples of running monolingual crawls

  • Given a seed URL list ENV_EN_seeds.txt, the following example crawls the web for 5 minutes and constructs a collection containing English web pages.
java -Dlog4j.configuration=file:/opt/ilsp-fc/log4j.xml -jar ilsp-fc-X.Y.Z-jar-with-dependencies.jar \
-crawl -export -dedup -a test -f -type m -c 5 -lang en -k -u ENV_EN_seeds.txt -xslt -oxslt \
-dest crawlResults -of output-test-list.txt -ofh output-test-list.txt.html 

In this and the other example commands in this documentation, a log4j.xml file is used to set logging configuration details. An example log4j.xml file can be downloaded from here.

  • Given a seed URL list ENV_EN_seeds.txt and a topic definition ENV_EN_topic.txt for the Environment domain in English, the following example crawls the web for 10 cycles and constructs a collection containing English web pages related to this domain.
java -Dlog4j.configuration=file:/opt/ilsp-fc/log4j.xml -jar ilsp-fc-X.Y.Z-jar-with-dependencies.jar \
-crawl -export -dedup -a test1 -f -type m -n 10 -lang en -k -u ENV_EN_seeds.txt -xslt -oxslt \
-tc ENV_EN_topic.txt -dom Environment -dest crawlResults -of output-test1-list.txt -ofh output-test1-list.txt.html

Example of running a bilingual crawl

java -Dlog4j.configuration=file:/opt/ilsp-fc/log4j.xml -jar /opt/ilsp-fc/ilsp-fc-X.Y.Z-jar-with-dependencies.jar \
-crawl -export -dedup -pairdetect -align -tmxmerge -f -k -xslt -oxslt -type p -n 10 -t 20 -len 0 -mtlen 80 \
-lang "en;es" -doctypes "auidh" -segtypes "1:1" -a test -u ENV_EN_ES_seed.txt \
-dest "crawlResults" -of "output_xml_list.txt" -ofh "output_xml_list.html" \
-oft "output_tmx_list.tmx.txt" -ofth "output_tmx_list.tmx.html" -tmx "output.tmx" -metadata

Other settings

Several settings influence the crawling process; they can be defined in a configuration file before crawling starts. The default configuration files for monolingual and bilingual crawls are FMC_config.xml and FBC_config.xml, respectively, and both are included in the ilsp-fc runnable jar.

Some of these settings can also be overridden with options of the ilsp-fc runnable jar, as follows:

-crawl : Runs the crawling process.

-f : Forces the crawler to start a new job.

-type : The type of crawling: monolingual (m) or parallel (p).

-lang : The ISO codes of the targeted languages, separated by ";".

-cfg : The full path to a configuration file that can be used to override default parameters.

-a : User agent name. For bilingual crawls, it is recommended to use a name similar to that of the targeted site.

-u : The full path of a text file containing the seed URLs that initialize the crawler. For bilingual crawling, the list should contain the URL of the main page of the targeted website, or other URLs of this website.

-filter : A regular expression used to filter out URLs that do NOT match it.
This filter forces the crawler to focus on a specific web domain (e.g. ".ec.europa.eu."), on a part of a web domain (e.g. "./legislation_summaries/environment."), or on several web sites (e.g. when the translations are hosted on two web sites, such as http://www.nrcan.gc.ca and http://www.rncan.gc.ca). Note that if this filter is used, only the seed URLs that match the regex will be fetched.
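The keep/discard effect of such a regex can be previewed before a crawl, e.g. with grep -E (an illustration only; the crawler applies the regex internally, and its exact regex dialect may differ from grep's):

```shell
# Preview which URLs a -filter regex would keep: the crawler fetches
# only URLs that MATCH the regex, and grep -E keeps the same lines.
printf '%s\n' \
  "http://www.nrcan.gc.ca/energy/efficiency" \
  "http://www.rncan.gc.ca/energie/efficacite" \
  "http://www.example.org/unrelated" \
  | grep -E "(nrcan|rncan)\.gc\.ca"
```

The first two URLs survive the filter, while the third is discarded (and, being a seed URL that does not match, would never be fetched).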

-n : The crawl duration in cycles. Since the crawler runs in cycles (during which links stored at the top of the crawler’s frontier are extracted and new links are examined), it is recommended to use this parameter either for testing purposes or with a large value (e.g. 100) to ensure that the crawler visits the entire website.

-c : The crawl duration in minutes. Since the crawler runs in cycles (during which links stored at the top of the crawler’s frontier are extracted and new links are examined), it is very likely that the defined time will expire during a running cycle; in that case, the crawler stops only after that cycle ends.

-dest : The directory where the results (i.e. the crawled data) will be stored. The tool creates the file structure dest/agent/crawl-id (where dest and agent stand for the values of the dest and agent parameters, and crawl-id is generated automatically). In this directory, the tool creates the "run" directories (i.e. directories containing all resources fetched/extracted/used/required during each cycle of the crawl), as well as a pdf directory for storing acquired pdf files.
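The layout described above can be sketched as follows (the names crawlResults, test, 1, run0 and run1 are illustrative; the actual crawl-id and run directories are generated by the tool):

```shell
# Mimic the structure the crawler creates under -dest:
# dest/agent/crawl-id, with per-cycle "run" directories and a pdf directory.
mkdir -p crawlResults/test/1/run0 crawlResults/test/1/run1 crawlResults/test/1/pdf
find crawlResults -type d | sort
```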

-t : The number of threads that will be used to fetch web pages in parallel.

-k : Forces the crawler to annotate boilerplate content in parsed text.

-len : Minimum number of tokens per paragraph. If the length (in tokens) of a paragraph is less than this value, the paragraph will be annotated as "out of interest" and will not be included in the clean text of the web page.

-mtlen : Minimum number of tokens in a cleaned document. If the length (in tokens) of the cleaned text is less than this value, the document will not be stored.
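A rough way to see how these length thresholds behave (assuming, for illustration only, that a token is simply a whitespace-separated word; the crawler's actual tokenization may differ):

```shell
# A 3-token paragraph checked against a -len threshold of 5:
len=5
para="Back to top"
if [ "$(echo "$para" | wc -w)" -lt "$len" ]; then
  echo "paragraph marked out of interest"
else
  echo "paragraph kept"
fi
```

Here the paragraph would be annotated as "out of interest", since 3 < 5; the -mtlen check works the same way, but on the whole cleaned document.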

-tc : The full path of a topic file, i.e. a text file containing a list of term triplets that describe the targeted topic. An example domain definition of "Environment" for the English-Spanish pair can be found at http://nlp.ilsp.gr/redmine/projects/ilsp-fc/wiki/ENV_EN_ES_topic. If omitted, the crawl will be a "general" one (i.e. the text-to-domain classification module will not be used).

-dom : Title of the targeted domain (required when a domain definition, i.e. the tc parameter, is used).

-storefilter : A regular expression used to discard (i.e. visit/fetch/process but not store) web pages whose URLs do NOT match it.

-d : Forces the crawler to stay within a web site (i.e. it starts from a web site and extracts only links to pages inside the same site). It should be used only for monolingual crawling.

-export : Runs the exporting process.

-of : The full path of a text file containing a list with the full paths of the exported cesDoc files.

-xslt : If present, inserts a stylesheet reference for rendering the XML results as HTML.

-oxslt : If present, exports crawl results with the help of an XSLT file for easier examination of the results.

-ofh : The full path of an HTML file containing a list of links pointing to HTML files (produced by XSL transformation of each XML file), for easier browsing of the collection.

-u_r : URL replacements. Besides the default patterns, the user can add more patterns, separated by ";".

Running modules of the ILSP-FC

The ILSP-FC, in a configuration for acquiring parallel data, applies the following processes (one after the other):
* Crawl
* Export
* Near Deduplication
* Pair Detection
* Segment Alignment
* TMX Merging
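
Each of these stages is switched on by the corresponding command-line option, in the order in which the options appear in the bilingual example above:

```shell
# Stage              → option (as used in the bilingual example)
# Crawl              → -crawl
# Export             → -export
# Near Deduplication → -dedup
# Pair Detection     → -pairdetect
# Segment Alignment  → -align
# TMX Merging        → -tmxmerge
```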