DOI: 10.5555/1072399
MUC6 '95: Proceedings of the 6th conference on Message understanding
1995 Proceeding
Publisher:
  • Association for Computational Linguistics
  • N. Eighth Street, Stroudsburg, PA 18360
  • United States
Conference:
Columbia, Maryland, November 6 - 8, 1995
ISBN:
978-1-55860-402-5
Published:
06 November 1995

Abstract

This volume documents the proceedings of the Sixth Message Understanding Conference (MUC-6), which was held on 6-8 November, 1995, in Columbia, Maryland. The conference was sponsored by the Defense Advanced Research Projects Agency, Information Technology Office (D. Gunning, Program Manager) under the auspices of the Tipster Text Program and was organized by the MUC-6 program committee, co-chaired by Beth Sundheim (NCCOSC/NRaD) and Ralph Grishman (NYU). Other members of the committee were Chinatsu Aone (SRA Corp.), Lois Childs (Lockheed Martin Corp.), Nancy Chinchor (SAIC), Jerry Hobbs (SRI International), Boyan Onyshkevych (U.S. Dept. of Defense), Marc Vilain (The MITRE Corp.), Takahiro Wakao (Univ. of Sheffield), and Ralph Weischedel (BBN Systems and Technologies).

The topic of the conference was performance assessment of text analysis software systems that analyze free text in accordance with prespecified task definitions. To represent the output of analysis, the systems either insert certain types of annotations into the text or extract certain types of information from the text. Prior to the conference, systems were developed and tested on up to four different tasks: insertion of Named Entity annotations, insertion of Coreference annotations, extraction of Organization and Person information, and extraction of event information concerning corporate management changes.

The conference was attended by representatives of organizations that participated in the evaluation, US Government representatives, and other invited guests. Sessions included task overviews presented by T. Wakao, C. Aone, L. Childs, and B. Sundheim; a presentation concerning MUC-6 scoring methodology by N. Chinchor; an overview of evaluation results by B. Sundheim; system demonstrations and papers on systems and test results given by the participating organizations in the evaluation; and presentations on the Coreference task and other topics by some of the evaluation participants, accompanied by discussions led by R. Grishman to critique the evaluation and make recommendations for future evaluations.

Papers in this volume reflect the information presented at the conference. The introductory paper by Grishman and Sundheim provides additional background, and the paper by Vilain et al. elaborates on the Coreference scoring method, which was new for this evaluation.
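For illustration, annotation-based output of the kind produced for the Named Entity task consists of SGML tags inserted around expressions in the source text. The ENAMEX and TIMEX element names with TYPE attributes follow the MUC-6 task definitions, but the sentence below is an invented example, not an excerpt from the evaluation corpus:

    <ENAMEX TYPE="PERSON">J. Smith</ENAMEX>, chief financial officer of
    <ENAMEX TYPE="ORGANIZATION">Acme Corp.</ENAMEX>, was named president on
    <TIMEX TYPE="DATE">Nov. 1</TIMEX>.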

Table of Contents
SESSION: Evaluation
Design of the MUC-6 evaluation

The sixth in a series of "Message Understanding Conferences", which are designed to promote and evaluate research in information extraction, was held last fall. MUC-6 introduced several innovations over prior MUCs, most notably in the range of different ...

Overview of results of the MUC-6 evaluation

The latest in a series of natural language processing system evaluations was concluded in October 1995 and was the topic of the Sixth Message Understanding Conference (MUC-6) in November. Participants were invited to enter their systems in as many as ...

Four scorers and seven years ago: the scoring method for MUC-6

The MUC-6 scoring method is based on a two-step process of mapping an item generated by a system under evaluation (the "response") to the corresponding item in the human-generated answer key and then scoring the mapped items. The resulting scores are ...
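As a rough sketch of the second step, once responses have been mapped to answer-key items, recall, precision, and an F-measure can be tallied from the match counts. The half-credit weighting of partial matches and the variable names below are illustrative assumptions, not the official MUC-6 scoring specification:

    # Sketch of MUC-style tallying over mapped items (illustrative only).
    def muc_scores(correct, partial, actual, possible, beta=1.0):
        # correct/partial: counts of fully / partially matching mapped items
        # actual: items the system generated; possible: items in the answer key
        credit = correct + 0.5 * partial          # assumed half credit for partial matches
        recall = credit / possible if possible else 0.0
        precision = credit / actual if actual else 0.0
        if precision + recall == 0.0:
            return recall, precision, 0.0
        f = (beta ** 2 + 1) * precision * recall / (beta ** 2 * precision + recall)
        return recall, precision, f

    print(muc_scores(correct=80, partial=10, actual=100, possible=120))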

Statistical significance of MUC-6 results

The results of the MUC-6 evaluation must be analyzed to determine whether close scores significantly distinguish systems or whether the differences in those scores are a matter of chance. In order to do such an analysis, a method of computer intensive ...
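One such computer-intensive approach is an approximate randomization test. The sketch below assumes, for illustration only, that each system's performance can be decomposed into per-document scores; the function and parameter names are not taken from the paper:

    import random

    # Approximate randomization: shuffle which system "owns" each document's
    # output and see how often the shuffled score difference is at least as
    # large as the observed one.
    def approx_randomization(scores_a, scores_b, trials=9999):
        observed = abs(sum(scores_a) - sum(scores_b))
        at_least_as_extreme = 0
        for _ in range(trials):
            total_a = total_b = 0.0
            for a, b in zip(scores_a, scores_b):
                if random.random() < 0.5:        # randomly swap the two systems on this document
                    a, b = b, a
                total_a += a
                total_b += b
            if abs(total_a - total_b) >= observed:
                at_least_as_extreme += 1
        return (at_least_as_extreme + 1) / (trials + 1)   # estimated significance level

    sys_a = [0.62, 0.71, 0.55, 0.80, 0.67]
    sys_b = [0.60, 0.69, 0.58, 0.74, 0.66]
    print(approx_randomization(sys_a, sys_b, trials=999))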

A model-theoretic coreference scoring scheme

This note describes a scoring scheme for the coreference task in MUC6. It improves on the original approach by: (1) grounding the scoring scheme in terms of a model; (2) producing more intuitive recall and precision scores; and (3) not requiring ...
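A minimal sketch of the link-based recall measure described in that paper follows, with key and response coreference chains simplified to sets of mention identifiers; precision is obtained by swapping the roles of key and response:

    # For each key chain S, count the links missing once S is partitioned by the
    # response (|S| - |p(S)|) against the links needed to build S (|S| - 1).
    def muc_coref_recall(key_chains, response_chains):
        numerator = denominator = 0
        for chain in key_chains:
            partitions = set()
            for mention in chain:
                # which response chain contains this mention? unaligned mentions
                # each form their own singleton partition
                owner = next((i for i, r in enumerate(response_chains) if mention in r),
                             ('singleton', mention))
                partitions.add(owner)
            numerator += len(chain) - len(partitions)    # links correctly recovered
            denominator += len(chain) - 1                # links required by the key
        return numerator / denominator if denominator else 0.0

    key = [{'a', 'b', 'c', 'd'}]
    response = [{'a', 'b'}, {'c', 'd'}]
    print(muc_coref_recall(key, response))   # 2/3: one of the three key links is missing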

SESSION: Systems
BBN: description of the PLUM system as used for MUC-6

This paper provides a quick summary of our technical approach, which has been developing since 1991 and was first fielded in MUC-3. First a quick review of what is new is provided, then a walkthrough of system components. Perhaps most interesting ...

University of Durham: description of the LOLITA system as used in MUC-6

This document describes the LOLITA system and how it was extended to run the four MUC tasks, discusses the resulting system's performance on the required "walk-through" article, and then considers the performance of this system on the final evaluation ...

Knight-Ridder Information's value adding name finder: a variation on the theme of FASTUS

Knight-Ridder Information, Inc., participated in MUC-6 with VANF (Value Adding Name Finder), the system used by Knight-Ridder Information in production for adding a Company Names descriptor field to online newspaper and newswire databases. Knight-Ridder ...

Lockheed Martin: LOUELLA PARSING, an NLToolset system for MUC-6

During the 1980s, General Electric Corporate Research and Development began the design and implementation of a set of text-processing tools known as the NLToolset. This suite of tools developed, over time, in cooperation with a subgroup of the ...

University of Manitoba: description of the PIE system used for MUC-6

The PIE (Principar-driven Information Extraction) system takes a different approach to the problem of information extraction than the NUBA system that was used in MUC-5. The NUBA system did not have a parser and relied on an abductive reasoner to ...

Description of the UMass system as used for MUC-6

Information extraction research at the University of Massachusetts is based on portable, trainable language processing components. Some components are more effective than others, some have been under development longer than others, but in all cases, we ...

MITRE: description of the Alembic system used for MUC-6

As with several other veteran MUC participants, MITRE's Alembic system has undergone a major transformation in the past two years. The genesis of this transformation occurred during a dinner conversation at the last MUC conference, MUC-5. At that time, ...

CRL/NMSU: description of the CRL/NMSU systems used for MUC-6

This paper discusses the two CRL named entity recognition systems submitted for MUC-6. The systems are based on entirely different approaches. The first is a data intensive method, which uses human generated patterns. The second uses the training data ...

The NYU system for MUC-6 or where's the syntax?

Over the past five MUCs, New York University has clung faithfully to the idea that information extraction should begin with a phase of full syntactic analysis, followed by a semantic analysis of the syntactic structure. Because we have a good, broad-...

University of Pennsylvania: description of the University of Pennsylvania system used for MUC-6

Breck Baldwin and Jeff Reynar informally began the University of Pennsylvania's MUC-6 coreference effort in January of 1995. For the first few months, tools were built and the system was extended at weekly 'hack sessions.' As more people began attending ...

Description of the SAIC DX system as used for MUC-6

This is a very young project, operational for only a few months. The focus of the effort is data-extraction, the identification of instances of data-classes in "commercial" text -- e.g., newspapers, technical reports, business correspondence, ...

University of Sheffield: description of the LaSIE system as used for MUC-6

The LaSIE (Large Scale Information Extraction) system has been developed at the University of Sheffield as part of an ongoing research effort into information extraction and, more generally, natural language engineering.

SRA: description of the SRA system as used for MUC-6

SRA used the combination of two systems for the MUC-6 tasks: NameTag™, a commercial software product that recognizes proper names and other key phrases in text; and HASTEN, an experimental text extraction system that has been under development for only ...

SRI International FASTUS system: MUC-6 test results and analysis

SRI International participated in the MUC-6 evaluation using the latest version of SRI's FASTUS system [1]. The FASTUS system was originally developed for participation in the MUC-4 evaluation [3] in 1992, and the performance of FASTUS in MUC-4 helped ...

Sterling software: an NLToolset-based system for MUC-6

For a little over two years, Sterling Software ITD has been developing the Automatic Templating System (ATS) [1] for automatically extracting entity and event data in the counter-narcotics domain from military messages. This system, part of the Counter ...

Wayne State University: description of the UNO natural language processing system as used for MUC-6

The UNO natural language processing (NLP) system implements a Boolean algebra computational model of natural language [Iwańska, 1992] [Iwańska, 1993] [Iwańska, 1994] [Iwańska, 1996b] and reflects our research hypothesis that natural language is a very ...
