Wisdom of crowds versus wisdom of linguists - measuring the semantic relatedness of words

From WikiLit
Authors: Torsten Zesch, Iryna Gurevych
Citation: Natural Language Engineering 16 (1): 25. 2009.
Publication type: Journal article
Peer-reviewed: Yes
DOI: 10.1017/S1351324909990167.
Added by Wikilit team: Added on initial load
Wisdom of crowds versus wisdom of linguists - measuring the semantic relatedness of words is a publication by Torsten Zesch and Iryna Gurevych.


Abstract

In this article, we present a comprehensive study aimed at computing semantic relatedness of word pairs. We analyze the performance of a large number of semantic relatedness measures proposed in the literature with respect to different experimental conditions, such as (i) the datasets employed, (ii) the language (English or German), (iii) the underlying knowledge source, and (iv) the evaluation task (computing scores of semantic relatedness, ranking word pairs, solving word choice problems). To our knowledge, this study is the first to systematically analyze semantic relatedness on a large number of datasets with different properties, while emphasizing the role of the knowledge source compiled either by the ‘wisdom of linguists’ (i.e., classical wordnets) or by the ‘wisdom of crowds’ (i.e., collaboratively constructed knowledge sources like Wikipedia). The article discusses benefits and drawbacks of different approaches to evaluating semantic relatedness. We show that results should be interpreted carefully to evaluate particular aspects of semantic relatedness. For the first time, we apply a vector based measure of semantic relatedness, relying on a concept space built from documents, to the first paragraphs of Wikipedia articles, to English WordNet glosses, and to GermaNet based pseudo glosses. Contrary to previous research (Strube and Ponzetto 2006; Gabrilovich and Markovitch 2007; Zesch et al. 2007), we find that ‘wisdom of crowds’ based resources are not superior to ‘wisdom of linguists’ based resources. We also find that using the first paragraph of a Wikipedia article as opposed to the whole article leads to better precision, but decreases recall.
Finally, we present two systems that were developed to aid the experiments presented herein and are freely available for research purposes: (i) DEXTRACT, a software to semi-automatically construct corpus-driven semantic relatedness datasets, and (ii) JWPL, a Java-based high-performance Wikipedia Application Programming Interface (API) for building natural language processing (NLP) applications.
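The concept vector approach described in the abstract can be illustrated with a small, self-contained sketch. All data and weighting details here are toy assumptions for illustration, not the paper's actual implementation: each word is mapped to a vector of TF-IDF weights over a set of concept texts (standing in for, e.g., the first paragraphs of Wikipedia articles), and the relatedness of two words is the cosine of their concept vectors.

```python
import math
from collections import Counter

# Toy "concept space": each concept (e.g. a Wikipedia article's first
# paragraph) is represented by its text. Purely illustrative data.
concepts = {
    "Finance": "bank money loan interest account money",
    "Geography": "river bank shore water stream",
    "Music": "guitar chord melody song note",
}

def concept_vector(word, concepts):
    """Map a word to a vector of TF-IDF weights over the concepts."""
    docs = {c: Counter(text.split()) for c, text in concepts.items()}
    df = sum(1 for tf in docs.values() if word in tf)  # document frequency
    if df == 0:
        return {}
    idf = math.log(len(docs) / df)
    return {c: tf[word] * idf for c, tf in docs.items() if word in tf}

def relatedness(w1, w2, concepts):
    """Cosine similarity of the two words' concept vectors."""
    v1, v2 = concept_vector(w1, concepts), concept_vector(w2, concepts)
    dot = sum(v1[c] * v2.get(c, 0.0) for c in v1)
    n1 = math.sqrt(sum(x * x for x in v1.values()))
    n2 = math.sqrt(sum(x * x for x in v2.values()))
    return dot / (n1 * n2) if n1 and n2 else 0.0
```

On this toy data, words confined to the same concept (e.g. "money" and "loan") score near 1, words from disjoint concepts score 0, and an ambiguous word like "bank" falls in between, which is the behavior the measure type relies on.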

Research questions

"In this article, we present a comprehensive study aimed at computing semantic relatedness of word pairs. We analyze the performance of a large number of semantic relatedness measures proposed in the literature with respect to different experimental conditions, such as (i) the datasets employed, (ii) the language (English or German), (iii) the underlying knowledge source, and (iv) the evaluation task (computing scores of semantic relatedness, ranking word pairs, solving word choice problems). To our knowledge, this study is the first to systematically analyze semantic relatedness on a large number of datasets with different properties, while emphasizing the role of the knowledge source compiled either by the ‘wisdom of linguists’ (i.e., classical wordnets) or by the ‘wisdom of crowds’ (i.e., collaboratively constructed knowledge sources like Wikipedia). The article discusses benefits and drawbacks of different approaches to evaluating semantic relatedness."

Research details

Topics: Semantic relatedness
Domains: Computer science
Theory type: Design and action
Wikipedia coverage: Sample data
Theories: "Undetermined"
Research design: Experiment
Data source: Experiment responses, Archival records, Wikipedia pages
Collected data time dimension: Cross-sectional
Unit of analysis: Article
Wikipedia data extraction: Dump
Wikipedia page type: Article
Wikipedia language: English

Conclusion

"Correlation with human judgments: Contrary to previous research (Strube and Ponzetto 2006; Gabrilovich and Markovitch 2007; Zesch et al. 2007), we find that (i) ‘wisdom of crowds’ based resources are not generally superior to ‘wisdom of linguists’ based resources. We further find that (ii) concept vector based measures consistently display superior performance compared to other measure types, and (iii) that the results on German datasets confirm the results for English. The restored competitiveness of ‘wisdom of linguists’ based resources is due to a generalized concept vector based measure (ZG07) introduced in this article. This measure is applicable to any knowledge source offering a textual representation of a concept. We showed how such textual representations can be inferred from semantic relations in wordnets without glosses. The performance gains that can be obtained with the generalized concept vector based measure strongly depend on the amount of additional information that the knowledge source offers in the textual representations.

Solving word choice problems: As this task depends much on the coverage of a knowledge source, results are different for English and German. On the English dataset, we find (i) little differences between ‘wisdom of linguists’ or ‘wisdom of crowds’ knowledge sources. On the German dataset the ‘crowds’ outperform the ‘linguists’ by a wide margin due to the much higher coverage of the SemRel measures using the German Wikipedia. We find that (ii) concept vector based measures using Wikipedia as a knowledge source perform consistently well, and outperform all other measure types with respect to accuracy and coverage on the English as well as the German dataset. However, a more detailed analysis of the word choice datasets with respect to the expected difficulty for a SemRel measure is necessary before we can draw final conclusions."
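The conclusion notes that textual representations ("pseudo glosses") can be inferred from semantic relations in wordnets that lack glosses. A minimal sketch of that idea, assuming a toy relation graph rather than real GermaNet data, is to collect the lemmas of all concepts within a small relation radius and use the result as the concept's text:

```python
# Toy relation graph: each concept maps to its directly related concepts
# (hypernyms, hyponyms, etc.). Illustrative data, not GermaNet.
relations = {
    "dog": ["canine", "poodle", "terrier"],
    "canine": ["carnivore", "dog", "wolf"],
    "poodle": ["dog"],
}

def pseudo_gloss(concept, relations, radius=1):
    """Build a textual representation for `concept` by collecting the
    lemmas of all concepts reachable within `radius` relation hops."""
    gloss, frontier = set(), {concept}
    for _ in range(radius):
        frontier = {r for c in frontier for r in relations.get(c, [])}
        gloss |= frontier
    gloss.discard(concept)  # the concept itself is not part of its gloss
    return " ".join(sorted(gloss))
```

The resulting string can then feed the same concept vector machinery used for real glosses, which is what makes the generalized measure applicable to knowledge sources without glosses; a larger radius trades precision for more textual material.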

Comments


Further notes