Property:Research questions

From WikiLit

Research questions that the authors of the article have explicitly posed. Very often, this field consists of direct quotations from the article.



Pages using the property "Research questions"

Showing 25 pages using this property.

'Wikipedia, the free encyclopedia' as a role model? Lessons for open innovation from an exploratory examination of the supposedly democratic-anarchic nature of Wikipedia +This study aims to analyze the following claims in a related realm: - traditional motivational theories are incapable of adequately explaining the extraordinary engagement of these highly-skilled developers... - because of lack of enduring organisational structures and non-existent contracts, this realm is described as being democratic, open or even anarchic

A

A 'resource review' of Wikipedia +A review of Wikipedia.
A Persian web page classifier applying a combination of content-based and context-based features +There are many automatic classification methods and algorithms that have been proposed for content-based or context-based features of web pages. In this paper we analyze these features and try to exploit a combination of features to improve categorization accuracy of Persian web page classification. In this work we have suggested a linear combination of different features and adjusting the optimum weighting during application.
A Wikipedia literature review +This literature review will come in several discrete steps. First, the nature of Wikipedia must be described and catalogued, so that mathematical rigor and statistical integrity can be applied to it. Second, the history of the existing statistical technologies will be presented, for reference to the changes we apply for this problem.
A Wikipedia matching approach to contextual advertising +In this paper, we propose a method for improving the relevance of contextual ads. We propose a novel “Wikipedia matching” technique that uses Wikipedia articles as “reference points” for ads selection. We show how to combine our new method with existing solutions in order to increase the overall performance. (A small sketch of this matching idea appears after this listing.)
A comparison of World Wide Web resources for identifying medical information
A comparison of privacy issues in collaborative workspaces and social networks +This paper investigates whether and to what extent social networks and collaborative workspaces can be treated equally when trying to solve privacy threats, and suggests a number of potential solutions that may mitigate these issues. The scope of the analysis is relatively general, as it is not the objective to solve one particular privacy problem with one specific solution. Rather, the goal is to outline possible types of solutions that may be considered based on the particular features of collective workspaces and social network sites.
A content-driven reputation system for the Wikipedia +We present a content-driven reputation system for Wikipedia authors. In our system, authors gain reputation when the edits they perform to Wikipedia articles are preserved by subsequent authors, and they lose reputation when their edits are rolled back or undone in short order. Thus, author reputation is computed solely on the basis of content evolution; user-to-user comments or ratings are not used. The author reputation we compute could be used to flag new contributions from low-reputation authors, or it could be used to allow only authors with high reputation to contribute to controversial or critical pages. A reputation system for the Wikipedia could also provide an incentive for high-quality contributions. We have implemented the proposed system, and we have used it to analyze the entire Italian and French Wikipedias, consisting of a total of 691,551 pages and 5,587,523 revisions. (A minimal sketch of this kind of reputation update appears after this listing.)
A cultural and political economy of Web 2.0 +How? That is, how has Web 2.0 contributed to what Andrejevic (2003) calls the "surveillance economy"? How has it encouraged users to produce content for free? Where is the line between the pleasure of users and their exploitation, and how is that line made technologically and socially feasible? Moreover, what are the politics of those empty frames that sumoto.iki drew her inspiration from?
A data-driven sketch of Wikipedia editors +How do Wikipedia readers and editors spend the rest of their online lives?
A five-year study of on-campus Internet use by undergraduate biomedical students +In this paper we report on a large-scale study of biomedical students’ on-campus use of Internet technologies over a five-year period. The study focuses on technologies related to four key activities associated with learning and teaching: information seeking, communication, university services and information sharing.
A framework for information quality assessment +"This article proposes a general IQ assessment framework" (p. 1720), i.e. the research question could be: what would a context-independent information quality measurement framework be like?
A knowledge-based search engine powered by Wikipedia +This paper describes Koru, a new search interface that offers effective domain-independent knowledge-based information retrieval. Koru exhibits an understanding of the topics of both queries and documents. This allows it to (a) expand queries automatically and (b) help guide the user as they evolve their queries interactively. Its understanding is mined from the vast investment of manual effort and judgment that is Wikipedia. We show how this open, constantly evolving encyclopedia can yield inexpensive knowledge structures that are specifically tailored to expose the topics, terminology and semantics of individual document collections. We conducted a detailed user study with 12 participants and 10 topics from the 2005 TREC HARD track, and found that Koru and its underlying knowledge base offer significant advantages over traditional keyword search. It was capable of lending assistance to almost every query issued to it; making their entry more efficient, improving the relevance of the documents they return, and narrowing the gap between expert and novice seekers.
A multimethod study of information quality in wiki collaboration +Will the total number of contributors involved in article development be positively related to the quality of peer-produced information? Will the average number of contributions per contributor be positively related to the quality of peer-produced information? Will the overall level of content shaping be positively related to the quality of peer-produced information? Will the number of anonymous contributors on an article be negatively related to the quality of peer-produced information? Will the top contributor's depth of experience be positively related to the quality of peer-produced information? Will the top contributor's breadth of experience be negatively related to the quality of peer-produced information?
A negative category based approach for Wikipedia document classification +This paper presents a profile-based method for Wikipedia XML document classification. This research aims at exploiting profile-based classification. The focus of the work is on improving profile creation, thereby improving the performance of classification.
A new year, a new Internet +This month we'll look at wikis and how they are being used in corporate environments.
A request for help to improve the coverage of the NHS and UK healthcare issues on Wikipedia +It has been suggested that its coverage of the NHS and UK healthcare issues is currently poor. Therefore, a group of users have got together to create an ‘NHS wikiproject’ <http://en.wikipedia.org/wiki/Wikipedia:WikiProject_National_Health_Service> to try to improve this.
A semantic approach for question classification using WordNet and Wikipedia +In this article, we have proposed a question classification method that exploits the powerful semantic features of the WordNet and the vast knowledge repository of the Wikipedia to describe informative terms explicitly. In this article, we are extending question classification as one of the heuristics for answer validation. We are proposing a World Wide Web based solution for answer validation where answers returned by open-domain Question Answering Systems can be validated using online resources such as Wikipedia and Google
A systemic and cognitive view on collaborative knowledge building with wikis +This article presents a theoretical framework for describing how learning and collaborative knowledge building take place. In order to understand these processes, three aspects need to be considered: the social processes facilitated by a wiki, the cognitive processes of the users, and how both processes influence each other mutually.
A tale of two tasks: editing in the era of digital literacies +My purpose is not to take a position on the credibility controversy. I understand why teachers are concerned about students citing Wikipedia uncritically, and I understand why tech-savvy teens growing up in an open-source culture might chafe at those concerns. What I find most interesting about Wikipedia, though, is less about whether information on a page can be trusted and more about how the pages are constructed collaboratively online. Teachers’ justification for banning or limiting students’ use of Wikipedia (“ANYONE can contribute”) is also what makes it a powerful example of 21st-century editing
A utility for estimating the relative contributions of wiki authors +"there is a need for an attribution mechanism that would automatically record (and present) the relative contribution of each author (...) In this paper, we discuss our initial work towards addressing this concern, and introduce a wiki add-on that automatically calculates the relative contributions of wiki authors" (p. 171)
Academics and Wikipedia: reframing Web 2.0 as a disruptor of traditional academic power-knowledge arrangements +This paper draws on an empirical research project to provide better evidence about the attitude of academics towards the use of Wikipedia and by extension Web 2.0 in undergraduate education. Issues around accuracy of content form the basis of an exploration of deeper issues around form and process. The core premise of this paper is that the actual cause of any apprehension about Wikipedia lies at a deeper, epistemological level. A more critical reading of the Wikipedia phenomenon in relation to higher education suggests the real concern is about the form of Wikipedia as a new knowledge construction process, and by extension, as the iconic representative of new and uncontrollable Web 2.0 collaborative knowledge production environments. This paper is, in the end, interested in the views and experiences academics have regarding Wikipedia, primarily to ascertain how these views and experiences might influence their perceptions about, and use of, other Web 2.0 applications that appear to disrupt traditional power-knowledge arrangements.
Accelerating networks +Evolving out-of-equilibrium networks have been under intense scrutiny recently. In many real-world settings the number of links added per new node is not constant but depends on the time at which the node is introduced in the system. This simple idea gives rise to the concept of accelerating networks, for which we review an existing definition and – after finding it somewhat constrictive – offer a new definition.
Access, claims and quality on the Internet - future challenges +Hypertext, the semantic web, Wikipedia and open source have brought many positive steps forward. This paper surveys these developments and outlines some of the challenges that lie ahead.
Accuracy estimate and optimization techniques for SimRank computation +The measure of similarity between objects is a very useful tool in many areas of computer science, including information retrieval. SimRank is a simple and intuitive measure of this kind, based on a graph-theoretic model. In this paper we present a technique to estimate the accuracy of computing SimRank iteratively. This technique provides a way to find out the number of iterations required to achieve a desired accuracy when computing SimRank. We also present optimization techniques that improve the computational complexity of the iterative algorithm from O(n^4) to O(n^3) in the worst case. We also introduce a threshold sieving heuristic and its accuracy estimation that further improves the efficiency of the method. (A naive iterative SimRank sketch appears after this listing.)
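
To make the “Wikipedia matching” entry above more concrete, here is a minimal Python sketch that scores an ad against a page by comparing their similarity profiles over a set of Wikipedia reference articles. The bag-of-words representation, the cosine measure and the function names (wiki_profile, match_score) are illustrative assumptions, not the feature set or algorithm published in the paper.

import math
from collections import Counter

def _cosine_bow(a, b):
    # Cosine similarity of two bag-of-words Counters.
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def wiki_profile(text, wiki_articles):
    # Similarity of `text` to each Wikipedia reference article (the "reference points").
    bow = Counter(text.lower().split())
    return [_cosine_bow(bow, Counter(article.lower().split())) for article in wiki_articles]

def match_score(page_text, ad_text, wiki_articles):
    # Compare a page and an ad through their Wikipedia profiles instead of raw keywords.
    p = wiki_profile(page_text, wiki_articles)
    a = wiki_profile(ad_text, wiki_articles)
    dot = sum(x * y for x, y in zip(p, a))
    norm_p = math.sqrt(sum(x * x for x in p))
    norm_a = math.sqrt(sum(x * x for x in a))
    return dot / (norm_p * norm_a) if norm_p and norm_a else 0.0

# Toy usage: pick the ad whose Wikipedia profile best matches the page.
refs = ["the national health service is the public health system of the united kingdom",
        "a tyre is a ring shaped component that surrounds a wheel rim"]
page = "nhs hospitals doctors patients health service waiting times"
ads = ["private health insurance for uk patients", "cheap tyres fitted while you wait"]
best_ad = max(ads, key=lambda ad: match_score(page, ad, refs))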
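
The content-driven reputation entry above turns on a single idea: authors gain reputation when their edits survive later revisions and lose it when they are reverted in short order. The sketch below illustrates that update rule; the gain and loss amounts and the binary notion of edit "survival" are simplifying assumptions rather than the authors' actual measure of how much of an edit is preserved.

def update_reputation(reputation, author, edit_survived, gain=1.0, loss=2.0):
    # reputation: dict mapping author name -> non-negative score.
    # edit_survived: True if later revisions preserved the edit,
    #                False if it was rolled back or undone in short order.
    current = reputation.get(author, 0.0)
    if edit_survived:
        reputation[author] = current + gain            # preserved content earns reputation
    else:
        reputation[author] = max(0.0, current - loss)  # reverted content loses it
    return reputation

# Replay the fate of a few edits in revision order.
rep = {}
for author, survived in [("alice", True), ("bob", False), ("alice", True)]:
    update_reputation(rep, author, survived)
print(rep)  # {'alice': 2.0, 'bob': 0.0}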
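
For the SimRank entry above, the following sketch spells out the plain iterative SimRank computation that such work starts from (worst-case O(n^4) work per iteration on dense graphs). It is only the baseline definition; the O(n^3) optimizations, the accuracy estimate and the threshold sieving heuristic described in the paper are not reproduced here.

def simrank(in_neighbors, c=0.8, iterations=10):
    # in_neighbors: dict mapping each node to the list of its in-neighbors.
    nodes = list(in_neighbors)
    sim = {a: {b: 1.0 if a == b else 0.0 for b in nodes} for a in nodes}
    for _ in range(iterations):
        new = {a: {b: 0.0 for b in nodes} for a in nodes}
        for a in nodes:
            for b in nodes:
                if a == b:
                    new[a][b] = 1.0
                    continue
                ia, ib = in_neighbors[a], in_neighbors[b]
                if not ia or not ib:
                    continue  # SimRank is 0 when either node has no in-links
                total = sum(sim[i][j] for i in ia for j in ib)
                new[a][b] = c * total / (len(ia) * len(ib))
        sim = new
    return sim

# Tiny example: v and w are similar because both are linked from u.
graph = {"u": [], "v": ["u"], "w": ["u"], "x": ["v", "w"]}
print(round(simrank(graph)["v"]["w"], 3))  # 0.8
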
Facts about "Research questions"
Has type: Text (a special property in this wiki)