Measuring article quality in Wikipedia: models and evaluation

From WikiLit
 

Revision as of 20:29, March 25, 2013

Publication
Measuring article quality in Wikipedia: models and evaluation
Authors: Meiqun Hu, Ee-Peng Lim, Aixin Sun, Hady Wirawan Lauw, Ba-Quy Vuong
Citation: CIKM '07 Proceedings of the sixteenth ACM conference on Conference on information and knowledge management: 243-252. 2007 November 6-9. Lisboa, Portugal. Association for Computing Machinery.
Publication type: Conference paper
Peer-reviewed: Yes
Database(s):
DOI: 10.1145/1321440.1321476
Link(s): http://dl.acm.org/citation.cfm?id=1321476
Added by Wikilit team: Added on initial load
Measuring article quality in Wikipedia: models and evaluation is a publication by Meiqun Hu, Ee-Peng Lim, Aixin Sun, Hady Wirawan Lauw, Ba-Quy Vuong.


Abstract

Wikipedia has grown to be the world's largest and busiest free encyclopedia, in which articles are collaboratively written and maintained by volunteers online. Despite its success as a means of knowledge sharing and collaboration, the public has never stopped criticizing the quality of Wikipedia articles edited by non-experts and inexperienced contributors. In this paper, we investigate the problem of assessing the quality of articles in collaborative authoring of Wikipedia. We propose three article quality measurement models that make use of the interaction data between articles and their contributors derived from the article edit history. Our basic model is designed based on the mutual dependency between article quality and their author authority. The PeerReview model introduces the review behavior into measuring article quality. Finally, our ProbReview models extend PeerReview with partial reviewership of contributors as they edit various portions of the articles. We conduct experiments on a set of well-labeled Wikipedia articles to evaluate the effectiveness of our quality measurement models in resembling human judgement. Copyright 2007 ACM.
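The basic model's "mutual dependency between article quality and author authority" suggests a HITS-style mutual-reinforcement iteration: quality flows from the authority of an article's contributors, and authority flows back from the quality of the articles a contributor edits. The paper's exact update rules are not reproduced in this record, so the sketch below is only an illustration; the contribution weights, the weighted-sum aggregation, and the L2 normalization are all assumptions.

```python
# Hedged sketch of a mutual-reinforcement computation in the spirit of the
# paper's Basic model. Edge weights, aggregation, and normalization here are
# illustrative assumptions, not the paper's exact formulas.

def basic_model(contributions, n_iter=50):
    """contributions: dict mapping (author, article) -> contribution amount
    (e.g. words contributed, derived from the article edit history)."""
    authors = {a for a, _ in contributions}
    articles = {p for _, p in contributions}
    authority = {a: 1.0 for a in authors}
    quality = {p: 1.0 for p in articles}
    for _ in range(n_iter):
        # Article quality aggregates the authority of its contributors,
        # weighted by how much each contributed.
        quality = {p: sum(w * authority[a]
                          for (a, q), w in contributions.items() if q == p)
                   for p in articles}
        # Author authority aggregates the quality of articles they edited.
        authority = {a: sum(w * quality[p]
                            for (b, p), w in contributions.items() if b == a)
                     for a in authors}
        # L2-normalize both score vectors so the iteration converges to a
        # fixed ranking instead of diverging (as in HITS).
        for d in (quality, authority):
            norm = sum(v * v for v in d.values()) ** 0.5 or 1.0
            for k in d:
                d[k] /= norm
    return quality, authority
```

On a toy history where one author contributes heavily to one article, that article's quality score and the author's authority score rise together, which is the mutual dependency the model is built on.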

Research questions

"In this paper, we investigate the problem of assessing the quality of articles in collaborative authoring of Wikipedia. We propose three article quality measurement models that make use of the interaction data between articles and their contributors derived from the article edit history."

Research details

Topics: Computational estimation of trustworthiness
Domains: Computer science
Theory type: Design and action
Wikipedia coverage: Main topic
Theories: "Undetermined"
Research design: Experiment
Data source:
Collected data time dimension: Cross-sectional
Unit of analysis: Article
Wikipedia data extraction: Live Wikipedia
Wikipedia page type: Article, History, Discussion and Q&A
Wikipedia language: English

Conclusion

"In this paper, we study models for automatically deriving Wikipedia article quality rankings based on the interaction data between articles and their contributors. Our PeerReview model, which was first proposed in [17], had already shown promising performance over the baseline model Naïve. We further extended it to emulate the probability of article content being reviewed by each contributor. As shown in our experiments, the extended ProbReview models with review probability decaying schemes were the best performers compared with all other models under the same setting. Observing that user interaction data itself is not sufficient in judging article quality and that article length appears to have some merits in identifying quality articles, we incorporated article length into article quality measurement. Our experimental results showed some performance improvement by Hybrid Basic and hybrid PeerReview models at γ = 0.1 and γ = 0.2 respectively. However, ProbReview models did not benefit from article length."
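The hybrid models mix a model's quality scores with article length using a weight γ (the conclusion reports gains at γ = 0.1 and γ = 0.2). The exact combination formula is not given in this record; a plausible sketch, assuming a convex combination of min-max-normalized scores, with `hybrid_score` as a hypothetical name:

```python
# Hedged sketch of a hybrid quality score. Assumption: the model score and
# article length are each min-max normalized, then combined as a convex
# combination weighted by gamma (this is not necessarily the paper's formula).

def hybrid_score(model_scores, lengths, gamma=0.1):
    """Combine a quality model's scores with article lengths.
    gamma is the weight given to length; gamma=0 recovers the pure model."""
    def minmax(d):
        lo, hi = min(d.values()), max(d.values())
        span = (hi - lo) or 1.0
        return {k: (v - lo) / span for k, v in d.items()}
    m, l = minmax(model_scores), minmax(lengths)
    return {k: (1 - gamma) * m[k] + gamma * l[k] for k in model_scores}
```

With a small γ the model's ranking dominates and length only breaks near-ties, which matches the conclusion's finding that length has "some merits" but interaction data carries most of the signal.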

Comments

"The models were able to properly assess the quality of articles on Wikipedia"

