The task performed by the service

Alignment Text alignment is the task of finding the correct correspondence between the locations of fragments in a given text and those in its translation.
Chunking / Segmentation Chunking is the analysis of a sentence that identifies its constituent parts (noun groups, verbs, verb groups, etc.).
Corpus Processing Any task that deals with the analysis and exploitation of textual corpora.
Corpus Workbench A suite of tools to manage corpus analysis.
Format Conversion Any task performing a change of format.
Lexicon-Terminology Extraction Terminology extraction is the process of automatically identifying and extracting terms from a text.
Management Any task that deals with management of Web services.
Morphological Tagging Morphological tagging is a process of labeling words in a text with their appropriate detailed morphological information.
Morphosyntactic Tagging Morphosyntactic tagging is a process of labeling words in a text with their appropriate detailed morphosyntactic information.
Named Entity Recognition Named Entity Recognition (NER) classifies elements in text into predefined categories such as the names of persons, organizations, locations, expressions of times, etc.
Querying Refers to the task of searching (possibly via regular expressions) for occurrences of a word or pattern in a corpus (the concordances).
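A concordance search of this kind can be sketched with the standard library alone; the function below is an illustrative assumption, not part of any service API, and returns each match in KWIC style (left context, hit, right context):

```python
import re

def concordance(corpus, pattern, window=20):
    """Find occurrences of a regular-expression pattern in a corpus and
    return each hit together with its surrounding context (a concordance)."""
    hits = []
    for m in re.finditer(pattern, corpus):
        left = corpus[max(0, m.start() - window):m.start()]
        right = corpus[m.end():m.end() + window]
        hits.append((left, m.group(), right))
    return hits
```

For example, `concordance("the cat sat on the mat", r"\bcat\b")` returns a single hit whose matched text is `"cat"`, flanked by its left and right context strings.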
Statistical Analysis Statistical analysis refers to a collection of methods used to process large amounts of data and interpret quantitative results.
Stemming / Lemmatization Stemming and lemmatization are processes for reducing inflected (or sometimes derived) words to their stem or lemma.
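A minimal suffix-stripping sketch of stemming is shown below; the suffix list and length threshold are toy assumptions, standing in for real stemmers (e.g. Porter) or dictionary-based lemmatizers:

```python
def naive_stem(word, suffixes=("ing", "ed", "es", "s")):
    """Strip the first matching inflectional suffix, keeping at least a
    three-character stem; a toy stand-in for a real stemmer."""
    for suf in suffixes:
        if word.endswith(suf) and len(word) - len(suf) >= 3:
            return word[:-len(suf)]
    return word
```

So `naive_stem("walking")` and `naive_stem("walked")` both reduce to `"walk"`, and `naive_stem("cats")` to `"cat"`; unlike lemmatization, nothing guarantees the result is a dictionary word.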
Syntactic Tagging Syntactic tagging is the process of annotating texts with syntactic information.
Text Mining Refers to the process of deriving high-quality information from text.
Text Similarity Text similarity is the task of comparing text segments based on the number of common words or shared information in paragraphs or sentences.
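One simple word-overlap measure of this kind is the Jaccard index over the two segments' word sets; the sketch below is an illustrative assumption, not the metric any particular service uses:

```python
def jaccard_similarity(a, b):
    """Compare two text segments by the proportion of words they share
    (Jaccard index: |intersection| / |union| of their word sets)."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    if not (sa | sb):
        return 0.0
    return len(sa & sb) / len(sa | sb)
```

For instance, `"the cat sat"` and `"the cat ran"` share two of four distinct words, giving a similarity of 0.5.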
Tokenization Tokenization is the process of breaking a stream of text up into words, phrases, symbols, or other meaningful elements called tokens.
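A minimal regex-based tokenizer illustrates the idea; real tokenizers additionally handle clitics, abbreviations, and language-specific rules, so this pattern is only a sketch:

```python
import re

def tokenize(text):
    """Break a text stream into tokens: runs of word characters, or
    single non-space punctuation symbols."""
    return re.findall(r"\w+|[^\w\s]", text)
```

For example, `tokenize("Hello, world!")` yields the tokens `["Hello", ",", "world", "!"]`.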
Data Anonymization Data anonymization is the process of removing personally identifiable information from data sets and/or scrambling the lines of a file so that the original texts are difficult to reproduce.
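A crude masking pass can be sketched with regular expressions; the e-mail and phone patterns below are illustrative assumptions (real anonymization typically relies on named entity recognition and curated gazetteers):

```python
import re

EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")
PHONE = re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b")

def anonymize(text):
    """Replace e-mail addresses and simple US-style phone numbers with
    placeholder tags; a toy sketch, not a complete anonymizer."""
    text = EMAIL.sub("[EMAIL]", text)
    return PHONE.sub("[PHONE]", text)
```

For example, `anonymize("Mail alice@example.com or call 555-123-4567.")` produces `"Mail [EMAIL] or call [PHONE]."`.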
Dependency parsing Dependency parsing is the task of parsing or analysing texts in order to identify dependency relations between words.
Text Handling Text handling is the task of performing (basic) text transformations on input data.
Lexicon look-up Lexicon look-up is the task of automatically performing searches in a lexicon.
Question answering Question Answering (QA) is a computer science discipline which is concerned with building systems that automatically answer questions posed by humans in a natural language.
Spelling Checker Spell checking is the task of automatically correcting spelling errors in texts.
Machine translation Machine translation (MT) is a sub-field of computational linguistics that investigates the use of software to translate text or speech from one natural language to another (Wikipedia).
Geovisualization Geovisualization, short for Geographic Visualization, refers to a set of tools and techniques supporting geospatial data analysis through the use of interactive visualization. It emphasizes knowledge construction over knowledge storage or information transmission. To do this, geovisualization communicates geospatial information in ways that, when combined with human understanding, allow for data exploration and decision-making processes [Wikipedia]