DUC 2007: Task, Documents, and Measures

The Document Understanding Conference (DUC) is a series of summarization evaluations that have been conducted by the National Institute of Standards and Technology (NIST) since 2001. Its goal is to further progress in automatic text summarization and to enable researchers to participate in large-scale experiments in both the development and evaluation of summarization systems. DUC 2007 will consist of two tasks. The tasks are independent, and participants in DUC 2007 may choose to do one or both tasks:
The main task is the same as the DUC 2006 task and will model real-world complex question answering, in which a question cannot be answered by simply stating a name, date, quantity, etc. Given a topic and a set of 25 relevant documents, the task is to synthesize a fluent, well-organized 250-word summary of the documents that answers the question(s) in the topic statement. Successful performance on the task will benefit from a combination of IR and NLP capabilities, including passage retrieval, compression, and generation of fluent text.

The update task will be to produce short (~100-word) multi-document update summaries of newswire articles under the assumption that the user has already read a set of earlier articles. The purpose of each update summary will be to inform the reader of new information about a particular topic.

Documents for Summarization

The documents for summarization will come from the AQUAINT corpus, comprising newswire articles from the Associated Press and New York Times (1998-2000) and Xinhua News Agency (1996-2000). All documents in the corpus follow a common DTD.

NIST assessors will develop topics of interest to them. Each assessor will create a topic and choose a set of 25 documents relevant to it; these documents will form the document cluster for that topic. Topics and document clusters will be distributed by NIST. Only DUC 2007 participants who have completed all required forms will be allowed access.

Main Task

Reference summaries

Each topic and its document cluster will be given to 4 different NIST assessors, including the developer of the topic. Each assessor will create a ~250-word summary of the document cluster that satisfies the information need expressed in the topic statement. These multiple reference summaries will be used in the evaluation of summary content.

System task

Given a DUC topic and a set of 25 relevant documents, create from the documents a brief, well-organized, fluent summary that answers the need for information expressed in the topic statement. All processing of documents and generation of summaries must be automatic. The summary can be no longer than 250 words (whitespace-delimited tokens). Summaries over the size limit will be truncated, and no bonus will be given for creating a shorter summary. No specific formatting other than linear is allowed. There will be 45 topics in the test data. Each group can submit one set of results, i.e., one summary for each topic/cluster. Participating groups should be able to evaluate additional results themselves using ISI's ROUGE/BE package.

Evaluation

All summaries will first be truncated to 250 words. Where sentences need to be identified for automatic evaluation, NIST will then use a simple Perl script for sentence segmentation.
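Because the size check is purely mechanical (whitespace-delimited tokens), a short sketch may help participants pre-check a submission. The Python snippet below is only an illustration under stated assumptions: NIST's actual truncation and Perl sentence-segmentation scripts are not reproduced here, the sentence splitter is deliberately naive, and the file name summary.txt is hypothetical.

import re

WORD_LIMIT = 250  # main task limit; the update task uses 100

def truncate_summary(text, limit=WORD_LIMIT):
    # Keep at most `limit` whitespace-delimited tokens, mirroring the size check described above.
    tokens = text.split()
    return " ".join(tokens[:limit])

def naive_sentences(text):
    # Very rough sentence segmentation for counting purposes only;
    # NIST's actual Perl segmentation script may behave differently.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

if __name__ == "__main__":
    with open("summary.txt") as f:   # hypothetical submission file
        summary = f.read()
    clipped = truncate_summary(summary)
    print(len(clipped.split()), "words after truncation")
    print(len(naive_sentences(clipped)), "sentences (naive count)")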
Update Task (Pilot)

The update summary pilot task will be to create short (~100-word) multi-document summaries under the assumption that the reader has already read a number of previous documents. The topics and documents for the update pilot will be a subset of those for the main DUC task. There will be approximately 10 topics in the test data, with 25 documents per topic. For each topic, the documents will be ordered chronologically and then partitioned into 3 sets, A-C, such that the time stamps of all documents in set A precede those in set B, which in turn precede those in set C (time(A) < time(B) < time(C)). There will be approximately 10 documents in set A, 8 in set B, and 7 in set C (a small partitioning sketch appears below, after the workshop information).

Reference Summaries

NIST assessors will be given instructions for writing update summaries. Each topic and its 3 document clusters, A-C, will be given to 4 different NIST assessors. Each assessor will create three 100-word, topic-focused summaries, one per cluster, that contribute to satisfying the information need expressed in the topic statement; the summaries for clusters B and C are update summaries written under the assumption that the reader has already read the documents in the earlier cluster(s).

System Task

Given a DUC topic and its 3 document clusters, A-C, create from the documents three brief, fluent summaries (one per cluster, with the same update assumptions as above) that contribute to satisfying the information need expressed in the topic statement.

Evaluation

All summaries will first be truncated to 100 words. Where sentences need to be identified for automatic evaluation, NIST will then use a simple Perl script for sentence segmentation.

Tools for DUC 2007

DUC Workshop Papers and Presentations

Each participant in the system task should submit a paper describing their system architecture, results, and analysis; these papers will be published in the DUC 2007 Workshop Proceedings. Participants who would like to give an oral presentation of their paper at the workshop should submit a presentation proposal by March 21, 2007, and the Program Committee will select the groups that will present at the workshop.
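As a concrete illustration of the chronological partitioning described for the update pilot, the following Python sketch splits a 25-document cluster into sets A, B, and C by date. It is only an assumption about how such a split could be computed: the document identifiers and date values used here are hypothetical, and the actual set assignments will be distributed by NIST.

from datetime import date

def partition_update_cluster(docs, sizes=(10, 8, 7)):
    # `docs` is assumed to be a list of (doc_id, date) pairs.
    # Sort oldest-first and split into sets A, B, C so that
    # time(A) < time(B) < time(C), as described above.
    ordered = sorted(docs, key=lambda d: d[1])
    sets, start = {}, 0
    for label, size in zip("ABC", sizes):
        sets[label] = ordered[start:start + size]
        start += size
    return sets

if __name__ == "__main__":
    # Hypothetical cluster of 25 dated documents
    cluster = [("XIN_%04d" % i, date(1998, 1, 1 + i)) for i in range(25)]
    parts = partition_update_cluster(cluster)
    for label in "ABC":
        print(label, [doc_id for doc_id, _ in parts[label]])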
For data, past results, mailing list, or other general information contact: Lori Buckland (lori.buckland@nist.gov)
For other questions contact: Hoa Dang (hoa.dang AT nist.gov)
Last updated: Thursday, 24-Mar-2011 17:40:47 UTC
Date created: Wednesday, 18-October-06