Call for papers: Workshop on Web Scale Knowledge Extraction

Abbreviated Title: 
WEKEX2011
Submission Deadline: 
15 Aug 2011
Event Dates: 
23 Oct 2011 - 27 Oct 2011
Location: 
ISWC 2011, Bonn, Germany
City: 
Bonn
Country: 
Germany
Contact: 
James Fan
Contact: 
Aditya Kalyanpur
Contact Email: 
fanj [at] us [dot] ibm [dot] com
Contact Email: 
adityakal [at] us [dot] ibm [dot] com

Recently, there has been a significant amount of interest in automatically creating large-scale knowledge bases from unstructured text. Compared to traditional, manually created representations, these knowledge bases have the advantage of scale and coverage. They often contain tens of millions of propositions, represented using a variety of encodings, from simple binary assertions to more complicated frame-like structures, and are extracted by parsing and analyzing large text corpora.

Also, unlike handcrafted KBs, which are limited to a handful of types and relations defined for a particular domain, mined KBs mostly contain linguistically inspired propositions and as a result span a wide range of topics. Such proposition stores have been used in several NLP applications, including word sense disambiguation, knowledge enrichment and question answering.
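As a toy sketch (not part of the call itself), a proposition store of the kind described above might hold binary assertions as scored triples, with a confidence value reflecting the noisiness of automatic extraction; the class, relation names, and scores below are illustrative assumptions:

```python
from collections import defaultdict

class PropositionStore:
    """Minimal store of (subject, relation, object) assertions
    with per-triple extraction confidence."""

    def __init__(self):
        self.triples = []                    # (subj, rel, obj, confidence)
        self.by_subject = defaultdict(list)  # index for quick retrieval

    def add(self, subj, rel, obj, confidence=1.0):
        self.triples.append((subj, rel, obj, confidence))
        self.by_subject[subj].append((rel, obj, confidence))

    def query(self, subj, min_confidence=0.0):
        # Return (relation, object) pairs meeting the confidence threshold.
        return [(r, o) for r, o, c in self.by_subject[subj]
                if c >= min_confidence]

# Hypothetical extracted assertions:
store = PropositionStore()
store.add("Bonn", "located_in", "Germany", 0.95)
store.add("Bonn", "capital_of", "West Germany", 0.60)
print(store.query("Bonn", min_confidence=0.9))  # [('located_in', 'Germany')]
```

Filtering by confidence at query time is one simple way an application can trade coverage against reliability when consuming noisy mined knowledge.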

This workshop is designed to gather researchers in the area of building and applying textually mined knowledge bases and to discuss key related issues.

Call for Papers

Papers related to one or more of the following topics are welcome; the list is not exhaustive:

Extraction Challenges
* How to extract knowledge effectively from large volumes of text
* How to deal with multiple sources, especially sources of differing granularity and reliability
* How to deal with incorrect or noisy data
* How to deal with entity identification and disambiguation in text
* How to evaluate and compare different extraction results
Representation and Storage Issues
* How to represent the extracted information, e.g. as assertions, rules, frames
* How to store and index the information for quick and flexible retrieval
* How to capture and represent the knowledge probabilistically
* How to model the context from which the extraction comes
* How to represent modal knowledge (beliefs, opinions, etc.) and incomplete knowledge
Relationship with Formal KR&R and Semantic Web
* How to combine/link textually mined knowledge to Semantic Web data (ontologies)
* How to combine fuzzy inference over mined data with formal logic based inference
Applications of Textually Mined Knowledge Stores
* Textual paraphrasing and entailment
* Co-reference resolution and entity/relation identification
* Macro-reading
* Question answering