GIVE-2.5 - Generating Instructions in Virtual Environments - Last Call for System Submissions

Abbreviated Title: 
GIVE-2.5 - Last Call for System Submissions
Submission Deadline: 
23 May 2011
Event Dates: 
28 Sep 2011 - 30 Sep 2011
Kristina Striegnitz
Contact Email: 
striegnk [at] union [dot] edu

GIVE-2.5: Last Call for Participation/System Submissions

Part of Generation Challenges 2011
Endorsed by SIGGEN and SIGSEM

We invite you to participate in the next edition of the Challenges on Generating Instructions in Virtual Environments (GIVE) by submitting an NLG system for the GIVE scenario. In this scenario, a human user performs a "treasure hunt" task in a virtual 3D environment. The NLG system's job is to generate, in real time, a sequence of natural-language instructions that will help the user perform this task.

GIVE-1 was organized in 2008-09. We evaluated five natural language generation systems using almost 1200 user interactions. GIVE-2, run in 2009-10, evaluated seven systems on more than 1800 user interactions. The public evaluation period for GIVE-2.5 will be in June and July 2011. The task in GIVE-2.5 will be essentially identical to the one in GIVE-2, both so that GIVE-2 systems can be improved based on experience from the evaluation and so that more people can participate in the GIVE-2 task. For more information, and to try out the GIVE-2 software, see the GIVE website.

If you are interested in participating, please email Kristina Striegnitz at striegnk [at] union [dot] edu so that we can send you further information and keep you updated on any new developments.

Overview of the GIVE challenges

The Challenge on Generating Instructions in Virtual Environments (GIVE) is a novel approach to the notoriously hard problem of evaluating NLG systems. As described above, the system generates real-time instructions that guide a human user through a "treasure hunt" task in a virtual 3D environment. The crucial point is that users connect to the generation systems over the Internet. By logging how well they are able to follow the system's instructions, we can evaluate the quality of these instructions in terms of task completion rates and times, subjective measures such as helpfulness and friendliness, and runtime performance. Because the user and the system need not be physically in the same place, recruiting experimental subjects over the Internet becomes easy.

GIVE is a theory-neutral, end-to-end evaluation effort for NLG systems. It involves research opportunities in text planning, sentence planning, realization, and situated communication. One particularly interesting aspect of situating the generation problem in a virtual environment is that spatial and relational expressions play a bigger role than in other NLG tasks. Beyond NLG, GIVE can be interesting as a testbed for improving the NLG components of dialogue systems, and for computational semanticists working on spatial language.

GIVE-1 and GIVE-2

In the GIVE-1 Challenge (2008-09), five NLG systems were evaluated using data from almost 1200 game runs. To our knowledge, this made GIVE-1 the largest NLG evaluation effort to date in terms of the number of experimental subjects. We presented the results of the evaluation at the ENLG Workshop, and have verified that they are consistent with (but more detailed than) the results that could be obtained from a traditional lab-based evaluation.

In GIVE-2, seven systems were evaluated by more than 1800 users from 39 countries; the results were presented at INLG 2010 in Ireland. The main novelty of GIVE-2 was the move from the discrete worlds of GIVE-1 (based on square tiles, in which the user could only jump from the center of one tile to the center of the next and turn in 90-degree steps) to worlds that permit free, continuous movement. This makes the generation task more challenging, because simple instructions of the form "walk three steps forward" are no longer possible.

Anyone is invited to submit an NLG system to participate in the GIVE-2.5 Challenge; this includes contributions from students and student teams. To get an idea of what this involves, you may want to go to the GIVE website mentioned above and take a look at our EACL 2009 demo paper describing the software architecture, or download the GIVE-2 software and look at it in more detail.


GIVE-2.5 will use the same software as GIVE-2, so development can begin immediately. Systems must be ready by the end of May, when a one-week internal testing period begins; during this period we will ensure that all systems satisfy some minimal quality standards. After internal testing we will distribute the evaluation worlds, and participants will have one more week to respond to our comments and make final adaptations of their systems to these worlds. This will be followed by the public evaluation, which runs until the end of July. The results will be presented as part of Generation Challenges 2011 at ENLG 2011 in Nancy, France, Sep. 28-30, 2011, and non-peer-reviewed papers describing each participating system will be included in the ENLG proceedings. Participants are also encouraged to submit more in-depth papers about their approach to ENLG or other peer-reviewed venues through the normal submission procedures.

Important dates for participants

May 23: systems running for internal testing
June 3: notification of acceptance
June 8: final systems running, start of public evaluation
July 1 & Aug. 1: snapshots of evaluation results made available
Sep. 1: camera ready system descriptions due
Sep. 28-30: presentation at Generation Challenges 2011

GIVE 2.5 Organizing committee

Alexandre Denis, Loria
Andrew Gargett, United Arab Emirates University
Konstantina Garoufi, Saarland University
Alexander Koller, University of Potsdam
Kristina Striegnitz, Union College
Mariet Theune, University of Twente

GIVE steering committee

Donna Byron, Northeastern University
Justine Cassell, Carnegie Mellon University
Robert Dale, Macquarie University
Alexander Koller, University of Potsdam
Johanna Moore, University of Edinburgh
Jon Oberlander, University of Edinburgh
Kristina Striegnitz, Union College