SemEval-3: 6th International Workshop on Semantic Evaluations (Call for Task Proposals - Extended Deadline)

Abbreviated Title: 
SemEval-3
Call for Proposals
Submission Deadline: 
20 Nov 2010
Contact: 
Suresh Manandhar
Deniz Yuret
Contact Email: 
suresh [at] cs [dot] york [dot] ac [dot] uk
dyuret [at] ku [dot] edu [dot] tr

APOLOGIES FOR CROSS-POSTING

SemEval-3
6th International Workshop on Semantic Evaluations

2nd Call for Task Proposals - Extended Deadline

The SemEval Programme Committee invites proposals for tasks to be run as part of SemEval-3. We welcome tasks that can test an automatic system for the semantic analysis of text, whether application-dependent or application-independent. We especially welcome tasks for different languages and cross-lingual tasks.

For SemEval-3 we particularly encourage the following aspects in task design:

Reuse of existing annotations and training data

The previous SemEval and Senseval workshops have generated large collections of annotated data. In addition, rich semantically annotated datasets such as OntoNotes, ANC and FrameNet are readily available. To reduce the burden on task organisers, we encourage reuse of existing resources. Where necessary, task organisers should create new test datasets while reusing previous training datasets. We realise that this may not always be feasible, and for novel tasks it may be necessary to create new annotations for both training and testing purposes.

Common data formats

To ensure that newer annotations conform to existing annotation standards, we encourage the use of existing data encoding standards such as MASC and UIMA. Where possible, reusing existing annotation standards and tools will make it easier to participate in multiple tasks. In addition, the use of readily available tools should make it easier for participants to spot bugs and improve their systems.

Common texts and multiple annotations

For many tasks, finding suitable texts can itself be a challenge, and the choice of texts is often somewhat ad hoc. To make it easier for task organisers to find suitable texts, we encourage the use of resources such as Wikipedia, ANC and OntoNotes. Where it makes sense, the SemEval Programme Committee will encourage task organisers to share the same texts across different tasks. In due course, we hope that this process will allow the generation of multiple semantic annotations for the same text.

Umbrella tasks

To reduce fragmentation of similar tasks, we encourage task organisers to propose larger tasks that include several subtasks. For example, Word Sense Induction in Japanese and Word Sense Induction in English could be combined into a single umbrella task with several subtasks. We welcome proposals for such larger tasks. In addition, the Programme Committee will actively encourage organisers proposing similar tasks to combine their efforts into larger umbrella tasks.

Application oriented tasks

We welcome tasks devoted to developing novel applications of computational semantics. As an analogy, the TREC Question-Answering (QA) track was devoted solely to building QA systems that could compete with existing IR systems. Similarly, we encourage tasks that have a clearly defined end-user application, showcasing and enhancing our understanding of computational semantics and extending the current state of the art.

Vote for SemEval 2012 or SemEval 2013 date

So far, SemEval has been organised on a 3-year cycle. However, many participants feel that this is a long wait; many other shared tasks, such as CoNLL and RTE, run annually. For this reason, we are giving task organisers the opportunity to choose between a 2-year and a 3-year cycle. Task proposers will be asked to vote for the date of SemEval-3, choosing amongst:

2012 - ACL
2013 - NAACL
2013 - ACL

The votes for [2013 NAACL] and [2013 ACL] will be added together. Based on the poll, either the 2012 or the 2013 option will be selected.

SUBMISSION DETAILS

Proposals for tasks will ideally contain:

* A description of the task (max 1 page)
* How the training/testing data will be built and/or procured
* The evaluation methodology to be used, including clear evaluation criteria
* The anticipated availability of necessary resources to the participants (copyright, etc)
* The resources required to prepare the task (computation and annotation time, etc)
* VOTE : Choose one amongst [ACL 2012], [NAACL 2013], [ACL 2013]

If you are not yet in a position to provide outlines of all of these, that is acceptable, but please give some thought to each and present a sketch of your first ideas.

We will gladly give feedback. Please submit proposals as soon as possible, preferably by electronic mail in plain ASCII text to the SemEval-3 email address:

semeval@cs.york.ac.uk

IMPORTANT DATES

November 22, 2010 Submission deadline for outline task proposals
November 10-25, 2010 Notification of acceptance/feedback
November 30, 2010 Submission deadline for full task proposals (+ vote for 2012 vs 2013 SemEval-3)
December 20, 2010 Notification of acceptance

Please note that a full task proposal (by November 30) without an outline proposal (by November 22) is acceptable but strongly discouraged.

The original deadline of 25 October has been extended to allow more time for outline proposals.

TENTATIVE SCHEDULE (if the 2012 date for SemEval-3 is selected)

December 22, 2010 Preliminary Call for participation
May 15, 2011 Release of trial data
May 16, 2011 Call for participation
November 10, 2011 Full Training Data and start of evaluation period
April 1, 2012 End of evaluation period (almost 5 months)
April 7, 2012 Paper submission deadline
April 15, 2012 Notification of acceptance
April 22, 2012 Camera-ready papers due
Summer 2012 Workshop co-located with ACL

CHAIRS

Suresh Manandhar, University of York, UK
Deniz Yuret, KoƧ University, Turkey

SemEval-3 WEBSITE

http://www.cs.york.ac.uk/semeval

SemEval-3 Wiki

http://aclweb.org/aclwiki/index.php?title=SemEval_3 (Under construction)