Towards unrestricted, large-scale acquisition of feature-based conceptual representations from corpus data
Publication | |
---|---|
Authors: | Barry Devereux, Nicholas Pilkington, Thierry Poibeau, Anna Korhonen |
Citation: | Research on Language and Computation 7: 137-170, 2009. |
Publication type: | Journal article |
Peer-reviewed: | Yes |
Database(s): | |
DOI: | |
Google Scholar: | http://scholar.google.com/scholar?ie=UTF-8&q=%22Towards%2Bunrestricted%2C%2Blarge-scale%2Bacquisition%2Bof%2Bfeature-based%2Bconceptual%2Brepresentations%2Bfrom%2Bcorpus%2Bdata%22 |
Link(s): | http://0-portal.acm.org.mercury.concordia.ca/citation.cfm?id=1861603.1861623&coll=DL&dl=GUIDE&CFID=112031225&CFTOKEN=18535462&preflayout=flat |
Added by Wikilit team: | Added on initial load |
Abstract
In recent years a number of methods have been proposed for the automatic acquisition of feature-based conceptual representations from text corpora. Such methods could offer valuable support for theoretical research on conceptual representation. However, existing methods do not target the full range of concept-relation-feature triples occurring in human-generated norms (e.g. flute produce sound) but rather focus on concept-feature pairs (e.g. flute --- sound) or triples involving specific relations only (e.g. is-a or part-of relations). In this article we investigate the challenges that need to be met in both methodology and evaluation when moving towards the acquisition of more comprehensive conceptual representations from corpora. In particular, we investigate the usefulness of three types of knowledge in guiding the extraction process: encyclopedic, syntactic and semantic. We present first a semantic analysis of existing, human-generated feature production norms, which reveals information about co-occurring concept and feature classes. We introduce then a novel method for large-scale feature extraction which uses the class-based information to guide the acquisition process. The method involves extracting candidate triples consisting of concepts, relations and features (e.g. deer have antlers, flute produce sound) from corpus data parsed for grammatical dependencies, and re-weighting the triples on the basis of conditional probabilities calculated from our semantic analysis. We apply this method to an automatically parsed Wikipedia corpus which includes encyclopedic information and evaluate its accuracy using a number of different methods: direct evaluation against the McRae norms in terms of feature types and frequencies, human evaluation, and novel evaluation in terms of conceptual structure variables. Our investigation highlights a number of issues which require addressing in both methodology and evaluation when aiming to improve the accuracy of unconstrained feature extraction further.
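The extraction-and-re-weighting procedure summarised in the abstract lends itself to a short illustration. The sketch below is not the authors' implementation; it assumes, purely for illustration, that candidate (concept, relation, feature) triples have already been extracted from a dependency-parsed corpus with raw corpus scores, and that concept and feature words have been mapped to coarse semantic classes. It estimates the conditional probability of a feature class given a concept class from norm-style triples and uses that probability to re-scale candidate scores; all names and toy data are invented.

```python
# Minimal sketch of class-based re-weighting of candidate triples.
# Assumptions (not from the paper): triples are already extracted from a
# dependency-parsed corpus, and concept/feature words are mapped to coarse
# semantic classes. P(feature class | concept class) is estimated from
# norm triples and used to re-scale each candidate's corpus score.
from collections import Counter

def class_conditional_probs(norm_triples, concept_class, feature_class):
    """Estimate P(feature class | concept class) from (concept, relation, feature) norms."""
    pair_counts = Counter()
    concept_counts = Counter()
    for concept, _relation, feature in norm_triples:
        c_cls = concept_class.get(concept)
        f_cls = feature_class.get(feature)
        if c_cls is None or f_cls is None:
            continue  # skip words with no class assignment
        pair_counts[(c_cls, f_cls)] += 1
        concept_counts[c_cls] += 1
    return {(c, f): n / concept_counts[c] for (c, f), n in pair_counts.items()}

def reweight(candidates, probs, concept_class, feature_class):
    """Multiply each candidate triple's corpus score by the class-based prior."""
    scored = []
    for (concept, relation, feature), corpus_score in candidates:
        prior = probs.get((concept_class.get(concept), feature_class.get(feature)), 0.0)
        scored.append(((concept, relation, feature), corpus_score * prior))
    return sorted(scored, key=lambda item: item[1], reverse=True)

# Toy, invented data: plausible triples are promoted, implausible ones demoted.
norms = [("deer", "have", "antlers"), ("flute", "produce", "sound")]
c_cls = {"deer": "animal", "flute": "artifact"}
f_cls = {"antlers": "body_part", "sound": "perceptual"}
probs = class_conditional_probs(norms, c_cls, f_cls)
candidates = [(("deer", "have", "antlers"), 12.0), (("deer", "have", "sound"), 3.0)]
print(reweight(candidates, probs, c_cls, f_cls))
```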
Research questions
"In this article we investigate the challenges that need to be met in both methodology and evaluation when moving towards the acquisition of more comprehensive conceptual representations from corpora. In particular, we investigate the usefulness of three types of knowledge in guiding the extraction process: encyclopedic, syntactic and semantic."
Research details
Topics: | Information extraction |
Domains: | Computer science |
Theory type: | Analysis, Design and action |
Wikipedia coverage: | Other |
Theories: | "Undetermined" |
Research design: | Experiment |
Data source: | |
Collected data time dimension: | Cross-sectional |
Unit of analysis: | Article |
Wikipedia data extraction: | Clone |
Wikipedia page type: | Article |
Wikipedia language: | English |
Conclusion
"In particular, we investigate the usefulness of three types of knowledge in guiding the extraction process: encyclopedic, syntactic and semantic. We present first a semantic analysis of existing, human-generated feature production norms, which reveals information about co-occurring concept and feature classes. We introduce then a novel method for large-scale feature extraction which uses the class-based information to guide the acquisition process. The method involves extracting candidate triples consisting of concepts, relations and features (e.g. deer have antlers, flute produce sound) from corpus data parsed for grammatical dependencies, and re-weighting the triples on the basis of conditional probabilities calculated from our semantic analysis. We apply this method to an automatically parsed Wikipedia corpus which includes encyclopedic information and evaluate its accuracy using a number of different methods: direct evaluation against the McRae norms in terms of feature types and frequencies, human evaluation, and novel evaluation in terms of conceptual structure variables.Our investigation highlights a number of issues which require addressing in both methodology and evaluation when aiming to improve the accuracy of unconstrained feature extraction further."
Comments
"We introduce then a novel method for large-scale feature extraction which uses the class-based information to guide the acquisition process."