Property:Wikipedia data extraction

From WikiLit

Wikipedia data extraction refers to the general means by which Wikipedia data was obtained for the purpose of the study. The options are:

  • Dump: A Wikipedia data dump was downloaded (and possibly installed locally) and analyzed (see the sketch after this list).
  • Live Wikipedia: Data was extracted by accessing the live Wikipedia website. This includes data extracted from history pages on the live Wikipedia, as long as a local copy of Wikipedia was not reproduced to obtain the data.
  • Secondary dataset: A preprocessed dataset of Wikipedia was used to obtain the data for analysis. That is, the researchers depended on someone else's reprocessing of a Wikipedia dump.
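
In practice, the difference between the first two values is whether the analysis ran against a downloaded dump file or against the live site. The Python sketch below is only a rough, hypothetical illustration of that contrast, not part of the WikiLit coding scheme: it streams the beginning of a compressed dump from the public Wikimedia dump mirror, and separately requests revision metadata through the live MediaWiki API. The requests library and the helper names sample_dump and live_revisions are assumptions made for this example; the URLs are the usual public endpoints.

import bz2
import requests

# "Dump": download a compressed Wikipedia dump and analyze it locally.
# (Standard Wikimedia mirror; the full file is many gigabytes.)
DUMP_URL = ("https://dumps.wikimedia.org/enwiki/latest/"
            "enwiki-latest-pages-articles.xml.bz2")

def sample_dump(url=DUMP_URL, max_bytes=1_000_000):
    """Stream and decompress only the first part of a dump, as a taster."""
    decompressor = bz2.BZ2Decompressor()
    data = b""
    with requests.get(url, stream=True, timeout=60) as resp:
        resp.raise_for_status()
        for chunk in resp.iter_content(chunk_size=64 * 1024):
            data += decompressor.decompress(chunk)
            if len(data) >= max_bytes:
                break
    return data.decode("utf-8", errors="replace")

# "Live Wikipedia": query the live site through the MediaWiki API.
API_URL = "https://en.wikipedia.org/w/api.php"

def live_revisions(title, limit=5):
    """Fetch recent revision metadata for one article from the live site."""
    params = {
        "action": "query",
        "prop": "revisions",
        "titles": title,
        "rvprop": "timestamp|user|comment",
        "rvlimit": limit,
        "format": "json",
    }
    resp = requests.get(API_URL, params=params, timeout=30)
    resp.raise_for_status()
    pages = resp.json()["query"]["pages"]
    return [rev for page in pages.values() for rev in page.get("revisions", [])]

if __name__ == "__main__":
    print(sample_dump()[:500])          # dump-based extraction
    print(live_revisions("Wikipedia"))  # live-Wikipedia extraction

A Secondary dataset study, by contrast, would skip both of these steps and instead load a preprocessed file that someone else had already derived from a dump.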


Values

Unique values: Dump, Live Wikipedia, N/A, Secondary dataset

Sample pages and their "Wikipedia data extraction" values:
A knowledge-based search engine powered by Wikipedia (Dump)
Crossing textual and visual content in different application scenarios (Dump)
Facet-based opinion retrieval from blogs (Dump)
Clustering of scientific citations in Wikipedia (Dump)
A content-driven reputation system for the Wikipedia (Dump)
… further results

Pages using the property "Wikipedia data extraction"

Showing 25 pages using this property.


'

'Wikipedia, the free encyclopedia' as a role model? Lessons for open innovation from an exploratory examination of the supposedly democratic-anarchic nature of Wikipedia (Live Wikipedia)

A

A 'resource review' of Wikipedia (Live Wikipedia)
A Persian web page classifier applying a combination of content-based and context-based features (Live Wikipedia)
A Wikipedia literature review (N/A)
A Wikipedia matching approach to contextual advertising (Live Wikipedia)
A comparison of World Wide Web resources for identifying medical information (Live Wikipedia)
A comparison of privacy issues in collaborative workspaces and social networks (N/A)
A content-driven reputation system for the Wikipedia (Dump)
A cultural and political economy of Web 2.0 (Live Wikipedia)
A data-driven sketch of Wikipedia editors (Live Wikipedia)
A five-year study of on-campus Internet use by undergraduate biomedical students (Live Wikipedia)
A framework for information quality assessment (Live Wikipedia)
A knowledge-based search engine powered by Wikipedia (Dump)
A multimethod study of information quality in wiki collaboration (Live Wikipedia)
A negative category based approach for Wikipedia document classification (Dump)
A new year, a new Internet (N/A)
A request for help to improve the coverage of the NHS and UK healthcare issues on Wikipedia (N/A)
A semantic approach for question classification using WordNet and Wikipedia (Live Wikipedia)
A systemic and cognitive view on collaborative knowledge building with wikis (Live Wikipedia)
A tale of two tasks: editing in the era of digital literacies (Live Wikipedia)
A utility for estimating the relative contributions of wiki authors (Live Wikipedia)
Academics and Wikipedia: reframing Web 2.0 as a disruptor of traditional academic power-knowledge arrangements (Live Wikipedia)
Accelerating networks (Live Wikipedia)
Access, claims and quality on the Internet - future challenges (N/A)
Accuracy estimate and optimization techniques for SimRank computation (Live Wikipedia)
Facts about "Wikipedia data extraction"
Has type: String