The Natural Language Processing and Computational Linguistics communities have traditionally tackled different problems with specific approaches, mostly in isolation or in a pipeline fashion. Isolated approaches focus on solving one particular aspect of Natural Language Processing without considering other problems, easily ending up with incoherent solutions. Pipeline approaches tackle sub-problems in sequence, where the output of one step is the input of the next. These methods suffer from error propagation, tend to be too deterministic (a decision cannot be revised later) and lead to sub-optimal solutions.
Another aspect that seems not to be fully considered is the role of context. For example, Word Sense Disambiguation (WSD) systems usually restrict the context of a word to a very narrow window of tokens around the target word, usually no larger than the sentence in which the token occurs. This is clearly not enough in cases where the clues for identifying the proper meaning of the word are found in another part of the document, or even outside the document altogether (background information).
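As a minimal sketch of the limitation described above, the following hypothetical function mimics the narrow-window context typically used by WSD systems: only tokens within a fixed distance of the target word are visible, so disambiguating clues elsewhere in the document are simply never seen (the function and example sentence are illustrative, not from any particular system):

```python
def context_window(tokens, target_index, n=3):
    """Return the tokens within n positions of the target (target excluded),
    emulating the narrow context window common in WSD systems."""
    start = max(0, target_index - n)
    end = min(len(tokens), target_index + n + 1)
    return tokens[start:target_index] + tokens[target_index + 1:end]

tokens = "the bank approved the loan after the river flooded".split()
# Disambiguating "bank" (index 1) with a +/-3 window never sees "river",
# even though it is a strong clue for the "riverbank" reading.
print(context_window(tokens, 1, n=3))
```

With a window of three tokens the context is `['the', 'approved', 'the', 'loan']`; the decisive token "river" lies outside the window, which is exactly the kind of case where document-level or background information is needed.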
These issues derive directly from the way Natural Language Processing has been conceived and the way NLP applications have been developed. These applications are mostly framed within computer science frameworks, in which it is relatively easy to define a specific task and an optimal expected output; this is not so trivial in NLP. We propose to see Natural Language Processing as a big puzzle: the different tasks are small pieces that must fit together perfectly to build an overall puzzle representing the interpretation of a document or a text. Following the puzzle analogy, the pieces cannot be considered in isolation. Moreover, external information is sometimes required to complete the puzzle, for example knowing what the puzzle depicts in order to get clues about how to fit the pieces together.
Hence, the scope of this workshop is to bring together approaches that address the hypotheses presented above in different ways. For instance, approaches that try to solve several NLP tasks at the same time, mutually sharing information among the specific subtasks to reach a good overall solution. Another interesting line of research is the use of external knowledge resources (such as DBpedia, Wikipedia or the Web) to extract background and real-world information that can help understand texts and solve NLP problems.
This workshop has not been organized previously, but we believe it addresses highly relevant topics that are currently being faced across a wide range of NLP fields. It targets anybody working on Computational Linguistics and Natural Language applications who is concerned with the ideas and approaches presented here.
For a list of topics of interest for the workshop, see the section Topics of Interest.