Towards unrestricted, large-scale acquisition of feature-based conceptual representations from corpus data
Abstract: In recent years a number of methods have been proposed for the automatic acquisition of feature-based conceptual representations from text corpora. Such methods could offer valuable support for theoretical research on conceptual representation. However, existing methods do not target the full range of concept-relation-feature triples occurring in human-generated norms (e.g. flute produce sound) but rather focus on concept-feature pairs (e.g. flute --- sound) or triples involving specific relations only (e.g. is-a or part-of relations). In this article we investigate the challenges that need to be met in both methodology and evaluation when moving towards the acquisition of more comprehensive conceptual representations from corpora. In particular, we investigate the usefulness of three types of knowledge in guiding the extraction process: encyclopedic, syntactic and semantic. We present first a semantic analysis of existing, human-generated feature production norms, which reveals information about co-occurring concept and feature classes. We introduce then a novel method for large-scale feature extraction which uses the class-based information to guide the acquisition process. The method involves extracting candidate triples consisting of concepts, relations and features (e.g. deer have antlers, flute produce sound) from corpus data parsed for grammatical dependencies, and re-weighting the triples on the basis of conditional probabilities calculated from our semantic analysis. We apply this method to an automatically parsed Wikipedia corpus which includes encyclopedic information and evaluate its accuracy using a number of different methods: direct evaluation against the McRae norms in terms of feature types and frequencies, human evaluation, and novel evaluation in terms of conceptual structure variables. Our investigation highlights a number of issues which require addressing in both methodology and evaluation when aiming to improve the accuracy of unconstrained feature extraction further.
Added by wikilit team: Added on initial load
Collected data time dimension: Cross-sectional
Comments: We introduce then a novel method for large-scale feature extraction which uses the class-based information to guide the acquisition process. (An illustrative sketch of this class-based re-weighting step is given after the property list below.)
Conclusion: In particular, we investigate the usefulness of three types of knowledge in guiding the extraction process: encyclopedic, syntactic and semantic. We present first a semantic analysis of existing, human-generated feature production norms, which reveals information about co-occurring concept and feature classes. We introduce then a novel method for large-scale feature extraction which uses the class-based information to guide the acquisition process. The method involves extracting candidate triples consisting of concepts, relations and features (e.g. deer have antlers, flute produce sound) from corpus data parsed for grammatical dependencies, and re-weighting the triples on the basis of conditional probabilities calculated from our semantic analysis. We apply this method to an automatically parsed Wikipedia corpus which includes encyclopedic information and evaluate its accuracy using a number of different methods: direct evaluation against the McRae norms in terms of feature types and frequencies, human evaluation, and novel evaluation in terms of conceptual structure variables. Our investigation highlights a number of issues which require addressing in both methodology and evaluation when aiming to improve the accuracy of unconstrained feature extraction further.
Data source: Experiment responses, Wikipedia pages
Google scholar url: http://scholar.google.com/scholar?ie=UTF-8&q=%22Towards%2Bunrestricted%2C%2Blarge-scale%2Bacquisition%2Bof%2Bfeature-based%2Bconceptual%2Brepresentations%2Bfrom%2Bcorpus%2Bdata%22
Has author: Barry Devereux, Nicholas Pilkington, Thierry Poibeau, Anna Korhonen
Has domain: Computer science
Has topic: Information extraction
Pages: 137-170
Peer reviewed: Yes
Publication type: Journal article
Published in: Research on Language and Computation
Research design: Experiment
Research questions: In this article we investigate the challenges that need to be met in both methodology and evaluation when moving towards the acquisition of more comprehensive conceptual representations from corpora. In particular, we investigate the usefulness of three types of knowledge in guiding the extraction process: encyclopedic, syntactic and semantic.
Revid: 11,006
Theories: Undetermined
Theory type: Analysis, Design and action
Title: Towards unrestricted, large-scale acquisition of feature-based conceptual representations from corpus data
Unit of analysis: Article
Url: http://0-portal.acm.org.mercury.concordia.ca/citation.cfm?id=1861603.1861623&coll=DL&dl=GUIDE&CFID=112031225&CFTOKEN=18535462&preflayout=flat
Volume: 7
Wikipedia coverage: Other
Wikipedia data extraction: Dump
Wikipedia language: English
Wikipedia page type: Article
Year: 2009
Creation date: 15 March 2012 20:31:57
Categories: Information extraction, Computer science, Publications
Modification date: 30 January 2014 20:32:00
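
The abstract describes extracting candidate concept-relation-feature triples from dependency-parsed text and re-weighting them with conditional probabilities over concept and feature classes derived from feature production norms. The following is a minimal, hypothetical Python sketch of that re-weighting step only; the class labels, probability values and the scoring formula (raw count multiplied by a class-based prior) are invented for illustration and are not the article's actual model or data.

# Illustrative sketch only: hypothetical data and a simplified re-weighting
# scheme, not the authors' implementation or class inventory.
from collections import Counter

# Hypothetical candidate triples (concept, relation, feature) with raw corpus
# counts, as might be extracted from dependency-parsed text.
candidate_triples = {
    ("deer", "have", "antlers"): 12,
    ("flute", "produce", "sound"): 9,
    ("flute", "have", "keys"): 4,
    ("deer", "eat", "grass"): 7,
}

# Hypothetical semantic classes for concepts and features (in the article these
# come from a semantic analysis of human-generated feature production norms).
concept_class = {"deer": "animal", "flute": "instrument"}
feature_class = {"antlers": "body_part", "sound": "sound", "keys": "part", "grass": "food"}

# Hypothetical conditional probabilities P(feature class | concept class),
# which the article estimates from the norms; values here are made up.
p_fclass_given_cclass = {
    ("animal", "body_part"): 0.30,
    ("animal", "food"): 0.25,
    ("instrument", "sound"): 0.35,
    ("instrument", "part"): 0.20,
}

def reweight(triples):
    """Re-weight raw triple counts by a class-based conditional probability,
    promoting triples whose concept and feature classes co-occur in the norms."""
    scores = Counter()
    for (concept, relation, feature), count in triples.items():
        c_cls = concept_class.get(concept)
        f_cls = feature_class.get(feature)
        prior = p_fclass_given_cclass.get((c_cls, f_cls), 0.01)
        scores[(concept, relation, feature)] = count * prior
    return scores.most_common()

if __name__ == "__main__":
    for triple, score in reweight(candidate_triples):
        print(triple, round(score, 2))

Running the sketch ranks plausible triples such as (deer, have, antlers) above class-implausible ones, which is the intuition behind the class-based guidance described in the abstract.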