
Property:Conclusion

Conclusions that the authors of the article have drawn from their study. Very often, this field consists of direct quotations from the article.



Pages using the property "Conclusion"

Showing 25 pages using this property.


A

Addressing gaps in knowledge while reading +The prototype tool, LiteraryMark, developed for this study employs a less intrusive approach to retrieving related information. The user simply highlights text in the passage and a relevant Wikipedia article is displayed in a pop-up box. In this study, we examined six algorithms exploiting the language of the abstract, as well as links and categories of the Wikipedia articles, as the context to narrow the results to one relevant Wikipedia article. The most effective algorithm, using the terms of the abstract alone, was successful in over 70% of the cases in a user study.
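The entry above describes narrowing a highlighted phrase to a single relevant Wikipedia article using the terms of the abstract as context. The following is a minimal sketch of that general idea only, assuming scikit-learn is available; the candidate set, the TF-IDF weighting, and the pick_article helper are illustrative stand-ins, not the study's actual algorithm.

```python
# Illustrative sketch: rank candidate Wikipedia articles for a highlighted
# phrase by cosine similarity between the abstract's terms and each
# candidate's text. The weighting scheme here is an assumption.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def pick_article(abstract, candidates):
    """Return the title of the candidate article most similar to the abstract.

    `candidates` maps article titles to their plain-text contents.
    """
    titles = list(candidates)
    vectorizer = TfidfVectorizer(stop_words="english")
    matrix = vectorizer.fit_transform([abstract] + [candidates[t] for t in titles])
    scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()
    return titles[scores.argmax()]

# Example: disambiguating the highlighted term "Mercury" using an abstract.
candidates = {
    "Mercury (planet)": "Mercury is the smallest planet in the Solar System ...",
    "Mercury (element)": "Mercury is a chemical element with the symbol Hg ...",
}
abstract = "We model the orbital dynamics of the innermost planet of the Solar System."
print(pick_article(abstract, candidates))
```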
Adhocratic governance in the Internet age: a case of Wikipedia +The English-language Wikipedia shows many signs of being an adhocracy—one closely connected to open-source development models found in the FOSS movement. Editors at Wikipedia share the adhocratic values of flat hierarchy, decentralization, little managerial control, and ad-hoc creation of informal multidisciplinary teams. Like individuals throughout most of the FOSS movement, they are highly motivated—not by potential financial gain, but by their project's ideology. In traditional adhocracies, individuals are bound by rules that cannot be altered; at Wikipedia, by contrast, there is no rule that cannot be altered if the community so desires. In Wikipedia's adhocracy, the editors not only “capture opportunities,” but they also can create those opportunities, since editors can change all policies and so enjoy an unprecedented degree of empowerment.
An Aesthetic for Deliberating Online: Thinking Through “Universal Pragmatics” and “Dialogism” with Reference to Wikipedia +First, we argue that the theories of Bakhtin and Habermas give us a sophisticated and politically engaged theoretic vocabulary that allows us to think through argumentation and persuasion online. However, our normative thinking about the Internet, and its perceived emancipatory potential, must be firmly embedded in the study of existing debate. To judge the emancipatory potential of the Internet and, indeed, the potential for agreement on issues of public life, it is necessary to study actual debate and discussion over a sustained period of time. Second, we show that, revised and supplemented, with specific reference to empirical investigation, elements of both Bakhtinian and Habermasian theory provide powerful tools for understanding debate and discussion online. From these thinkers we take a view of language use not only as vehicles for communicating beliefs and ideas, but also as intimately connected to political struggles, in that language reflects and impacts on particular contexts. In addition, we take a view of linguistic meaning and knowledge, both in virtual communities and in the “real world,” as socially constructed and shaped through interaction—rather than straightforwardly dictated from above, inherently conflictual and fated to disagree, or imposed by a will to power. Therefore, it seems at least theoretically possible that disparate social groups could reach a genuine consensus on issues of public life, and it is not the case that typically subjugated voices are bound to their positions of subservience. The potential for agreement despite social difference can be seen, for example, in the earlier discussion, in the albeit fleeting and transitory moments of consensus reached on the topics of stem cells and transhumanism. Third, our analysis shows that critical thinking on the Internet should be open to the complexity and ambiguity of deliberative relations and the often “irrational” ways in which the “truth” or socially meaningful knowledge emerges in online environments.
An activity theoretic model for information quality change +In this study we analyzed time series data on the edit processes of FAs and FFAs. Although the time series data exhibited different trajectories for different articles, we observed a number of stable patterns in the trajectories. The patterns appeared to follow the life cycles of the underlying entities. An analysis of FAR and FARC discussions on FFAs showed that IQ could be changed not only actively by editors, malicious agents, or IQA agents editing the article, but also passively by changes in the article’s underlying entity or the context of its evaluation and use. The IQ of the majority of FFAs had been re–evaluated as lower, and these FFAs lost their high–quality status after the community decided to increase IQ requirements. We believe that this study of the patterns of IQ processes and the sources of IQ variance in Wikipedia can contribute to a better understanding of IQ dynamics, and that it has useful implications for optimizing IQ assurance in traditional databases. In particular, the activity theoretic model of IQ change and information type specific edit process patterns identified in this study can serve as a reusable knowledge resource for predicting IQ changes and guiding IQ maintenance actions and resource allocation. The model can also inform the design of software architecture and tools for automatic IQ assurance. Future work will include investigating the cost structure of IQ and linking it to IQ decision–making.
An analysis of Wikipedia +Our main result in this paper was an explanation for the size of Wikipedia based on equilibrium contributions depending on the differences in types. Free-editing allows for a variety of expressions; expressions that reflect differences in type. In addition, using well-grounded principles from information economics, we explained why Wikipedia's commercial counterpart could be much smaller in size. Our results were important as we are able to establish both lower and upper bounds for the reliability of Wikipedia. Qualitatively, Wikipedia's definition as a public good, combined with free-riding and free-editing helps to maintain the reliability of Wikipedia. Our findings have implications for the much-debated topic of credibility in the new Collaborative Net environment. We also highlight the uniqueness of Wikipedia when compared to (other) Open Source Systems.
An analysis of the delayed response to hurricane Katrina through the lens of knowledge management +The findings reveal that the delay causes were inter-related and were largely traceable to the lapses in the KM processes within and across agencies involved in Katrina. Three main KM implications for disaster management are as follows. First is the high cost of the knowledge-doing chasm in the knowledge creation process. Capturing and analysing disaster data cannot be mere cognitive activities confined within the intellectual sphere, but they must be dovetailed with tangible, follow-up actions. Next, to facilitate the knowledge transfer process, particularly in a large-scale disaster that demands the involvement of multiple agencies, it is imperative to establish a unified command structure. Confusion and chaos often stem from not having a clear chain of command. Third, the precursor to knowledge reuse in disaster management is an accurate assessment of a disaster's severity. The underestimation of Katrina's impact was the start of a series of mistakes in managing the incident.
An analysis of topical coverage of Wikipedia +Overall, we found that the degree to which Wikipedia is lacking depends heavily on one’s perspective. Even in the least covered areas, because of its sheer size, Wikipedia does well, but since a collection that is meant to represent general knowledge is likely to be judged by the areas in which it is weakest, it is important to identify these areas and determine why they are not more fully elaborated. It cannot be a coincidence that two areas that are particularly lacking on Wikipedia—law and medicine—are also the purview of licensed experts. Many attorneys have taken up blogging with open arms and medical research is now frequently published in open access journals, both suggesting that there is not always an impediment to these groups contributing to online resources. Despite the noted difficulties of partitioning Wikipedia into topical domains, the sheer number of articles presented by Wikipedia far outstrips the bound encyclopedias we investigated. Can you have too much of a good thing? There may be some question as to whether an article on ‘‘Finnish Profanity’’ rises to the same level of importance as ‘‘Finnish Grammar’’—someone seeking out the most important topics in any sub-domain of human knowledge might have difficulty finding them in Wikipedia. But assuming the most important topics are covered well, there is no reason that other topics that may be considered somewhat more marginal should not also be available. At present, several projects are underway to ensure that important topics receive appropriate coverage. WikiProject Physics, for example, has several dozen participants who are actively contributing to the breadth, quality, and organization of physics-related articles on Wikipedia. The project maintains a list of missing and inadequate articles, as well as a list of articles awaiting expert review. Several of the orphan articles located by our comparison were actually listed on various ‘‘missing topics’’ pages, indicating that if this study were replicated in the future, the correlation between the printed encyclopedias and Wikipedia would increase. Both approaches taken here provide some indication of the kinds of topics that Wikipedia emphasizes. We have provided some initial observations as to why these differences exist, but there is still much to be done in this regard. Wikipedia remains a surprise in many ways, in part because it is difficult to gauge the motivations of its contributors. By understanding why and how people contribute to Wikipedia, particularly within various knowledge sub-domains, we may be able to encourage work in areas that are, relatively speaking, in need of more contributions.
An axiomatic approach for result diversification +This work presents an approach to characterizing diversification systems using a set of natural axioms and an empirical analysis that qualitatively compares the choice of axioms, relevance and distance functions using the well-known measures of novelty and relevance. The choice of axioms presents a clean way of characterizing objectives independent of the algorithms used for the objective, and the specific forms of the distance and relevance functions. Specifically, we illustrate the use of the axiomatic framework by studying three objectives satisfying different subsets of axioms. The empirical analysis on the other hand, while being dependent on these parameters, has the advantage of being able to quantify the trade-offs between novelty and relevance in the diversification objective. In this regard, we explore two applications of web search and product search, each with different notions of relevance and distance. In each application, we compare the performance of the three objectives by measuring the trade-off in novelty and relevance.
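The objectives discussed above combine a relevance function and a pairwise distance function. As an illustration only, and not the paper's exact objectives, here is a greedy sketch of one such bicriteria objective; the diversify helper and the lam trade-off parameter are assumptions of this example.

```python
# Illustrative sketch: greedily pick k items maximizing a weighted sum of
# relevance and pairwise distance, with lam controlling the trade-off.
def diversify(items, relevance, distance, k, lam=0.5):
    """Greedy selection for lam * sum(relevance) + (1 - lam) * sum of pairwise distances."""
    selected = []
    remaining = list(items)
    while remaining and len(selected) < k:
        def marginal_gain(x):
            return lam * relevance(x) + (1 - lam) * sum(distance(x, s) for s in selected)
        best = max(remaining, key=marginal_gain)
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy example: items are points on a line; relevance favors small values,
# distance is absolute difference, so the result mixes relevant and spread-out points.
items = [0.0, 0.1, 0.2, 5.0, 5.1, 10.0]
print(diversify(items, relevance=lambda x: 1.0 / (1.0 + x),
                distance=lambda a, b: abs(a - b), k=3))
```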
An empirical examination of Wikipedia's credibility +No difference was found between the two groups in terms of their perceived credibility of Wikipedia or of the articles’ authors, but a difference was found in the credibility of the articles — the experts found Wikipedia’s articles to be more credible than the non–experts. This suggests that the accuracy of Wikipedia is high. However, the results should not be seen as support for Wikipedia as a totally reliable resource as, according to the experts, 13 percent of the articles contain mistakes.
An empirical study of the effects of NLP components on geographic IR performance +Table 7 shows the MAP scores for these runs on the annotated collections, using the title and description fields in the GeoCLEF 2005 and 2006 queries. The most impressive result from this table is the consistent effectiveness of the query normalisation strategy. On all annotated collections and GeoCLEF queries (2005/2006) the average MAP scores of the geo run exceed the corresponding geo_nonorm scores. In addition, the MAP score range of the geo run only varies between 0.32 and 0.36 (with the exception of disambiguation accuracy of 0%), which shows how effective the normalisation strategy is at reducing the negative effects of the NLP errors on retrieval performance. The MAP scores in Table 7 also provide us with a means of analysing the effect of NLP errors on GIR performance. However, since the geo and geo_nonorm runs both mitigate NLP errors by including location text terms in their queries, we will focus the rest of this discussion on the results of the geo_notxt run as it is more sensitive to these errors. Hence, we can conclude that low NERC recall has a greater impact on retrieval effectiveness than low NERC precision does. However, the most significant finding of all our experiments is that a baseline IR system run on nonannotated data performs nearly as well as our top performing Geo run (OpenNLP) on the GeoCLEF 2005 and 2006 topics.
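For readers unfamiliar with the metric, the MAP figures compared above follow the standard definition of mean average precision over ranked retrieval runs. The sketch below is a generic illustration of that definition, not code from the study.

```python
# Standard TREC-style mean average precision over a set of queries.
def average_precision(ranked_docs, relevant):
    """Average precision of one ranked list given the set of relevant doc ids."""
    hits, precision_sum = 0, 0.0
    for rank, doc in enumerate(ranked_docs, start=1):
        if doc in relevant:
            hits += 1
            precision_sum += hits / rank
    return precision_sum / len(relevant) if relevant else 0.0

def mean_average_precision(runs):
    """`runs` is a list of (ranked_docs, relevant_set) pairs, one per query."""
    return sum(average_precision(docs, rel) for docs, rel in runs) / len(runs)

# Example with two queries:
print(mean_average_precision([
    (["d1", "d2", "d3"], {"d1", "d3"}),   # AP = (1/1 + 2/3) / 2
    (["d4", "d5"], {"d5"}),               # AP = 1/2
]))
```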
An evaluation of medical knowledge contained in Wikipedia and its use in the LOINC database +We conclude that Wikipedia contains a surprisingly large amount of scientific and medical data and could effectively be used as an initial knowledge base for specific medical informatics and research projects. The software we developed to automate the matching of LOINC part names to Wikipedia articles performed satisfactorily with high sensitivity and moderate specificity. The current release of RELMA and LOINC include descriptions of LOINC parts obtained from Wikipedia as a direct result of this project.
An exploration on on-line mass collaboration: focusing on its motivation structure +This research, therefore, 1) developed a typology of “on-line cooperation”, according to its goal or result, 2) explored each individual’s incentive in three dimensions and its realizable combination, and 3) observed what the dominant individuals’ incentives for each type of cooperation are. Cooperation has been categorized into parochial cooperation, active cooperation, market-alternative cooperation and unintended cooperation, by, on the one hand, how specific the expected beneficiaries are and, on the other hand, whether it has an external effect. To further analyze individuals’ incentives, we adopted Benkler’s three dimensional framework—monetary rewards, intrinsic/hedonic rewards, and social-psychological rewards—and explored its meaning and validity. Monetary rewards have been interpreted as a measure of altruistic/egoistic behavior, while intrinsic/hedonic rewards are used for dividing hard-core cooperation from soft-core cooperation. Psychological rewards, which I argue are the most substantial, are the indispensable element in making a web-based collaboration work.
An inside view: credibility in Wikipedia from the perspective of editors +Results. The participants use Wikipedia for purposes where it is not vital that the information is correct. Their credibility assessments are mainly based on authorship, verifiability, and the editing history of an article. Conclusions. The situations and purposes for which the editors use Wikipedia are similar to other user groups, but they draw on their knowledge as members of the network of practice of wikipedians to make credibility assessments, including knowledge of certain editors and of the MediaWiki architecture. Their assessments have more similarities to those used in traditional media than to assessments springing from the wisdom of crowds.
Analyzing and visualizing the semantic coverage of Wikipedia and its authors +This article presented, to our knowledge, the first semantic map of the English Wikipedia data. The map shows that when co-occurrence of categories within articles is considered as a measure of category similarity, categories cluster naturally, revealing the content coverage of Wikipedia. The map also shows that the category structure is well maintained, although the bots and users involved in its maintenance have varied scope and intentions.
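The entry treats co-occurrence of categories within articles as a measure of category similarity. Below is a minimal sketch of one plausible such measure, assuming a Jaccard-style overlap of the article sets in which two categories appear; the paper's actual weighting may differ.

```python
# Assumed Jaccard-style similarity between Wikipedia categories, based on
# the articles in which they co-occur.
from collections import defaultdict

def category_similarity(article_categories):
    """`article_categories` maps article titles to sets of category names.
    Returns {(cat_a, cat_b): similarity} for co-occurring category pairs."""
    articles_per_category = defaultdict(set)
    for article, cats in article_categories.items():
        for c in cats:
            articles_per_category[c].add(article)
    sims = {}
    cats = sorted(articles_per_category)
    for i, a in enumerate(cats):
        for b in cats[i + 1:]:
            inter = articles_per_category[a] & articles_per_category[b]
            if inter:
                union = articles_per_category[a] | articles_per_category[b]
                sims[(a, b)] = len(inter) / len(union)
    return sims

print(category_similarity({
    "Albert Einstein": {"Physicists", "Nobel laureates"},
    "Marie Curie": {"Physicists", "Chemists", "Nobel laureates"},
}))
```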
Analyzing the creative editing behavior of Wikipedia editors: through dynamic social network analysis +While coolfarmers are the backbone of Wikipedia, recognizing and adequately responding to egoboosters poses major ethical challenges. Leaving egoboosters unpunished degrades the quality of Wikipedia, thus also doing a huge disservice to the tireless and immensely valuable work of the coolfarmers. We therefore think that identifying the patterns of coolfarmers and egoboosters will bring a big benefit to Wikipedia. Debunking the egoboosters takes a lot of moral authority, and who better to apply that moral authority by removing and/or reprimanding egoboosters than the coolfarmers. Finding suitable and hard-to-spam metrics for identifying the most valuable contributors to Wikipedia has direct practical applicability beyond finding the egobooster, by e.g. proposing alternate ranking systems for the quality of articles based on the quality of contributors.
Applications of semantic web methodologies and techniques to social networks and social websites +In this paper, we have described the significance of community-oriented and content-sharing sites on the Web, the shortcomings of many of these sites as they are now, and the benefits that semantic technologies can bring to social networks and social websites. Online social spaces encouraging content creation and sharing have resulted in the formation of massive and intricate networks of people and associated content. However, the lack of integration between sites means that these networks are disjoint and users are unable to reuse data across sites. Semantic Web technologies can solve some of these issues and improve the value and functionality of online social spaces. The process of creating and using semantic data in the Social Web can be viewed as a sort of food chain of producers, collectors and consumers. Semantic data producers publish information in structured, common formats, such that it can be easily integrated with data from other diverse sources. Collectors, if necessary, aggregate and consolidate heterogeneous data from other diverse sources. Consumers may use this data for analysis or in end-user applications.
Are web-based informational queries changing? +The survey results suggest that a large group of queries which, in the past, would have been classified as informational have become at least partially navigational. We contend that this change has occurred because of the rise of large web sites holding particular types of information, such as Wikipedia and IMDB. The questionnaire respondents stated that Wikipedia and IMDB were convenient and held a sufficient level of information to address their information need. It would appear that to these users, the sites were expected to be accessible. Based on such definitions and the results from the survey of user intentions, one should consider whether many of the informational queries submitted to search engines should be thought of as at least partially navigational, where searchers expect their need to be served by a specific large web site which they know holds the type of information they seek. Although Bharat and Henzinger (1998) showed the importance of ensuring the results of informational searches were authoritative, they did not suggest that a limited number of web sites would become authoritative for broad classes of queries, which the results of the questionnaire seem to suggest. However, the survey only points to indications of this type of navigational information searching. Users were asked about their intentions when searching; their actual querying behavior was not observed. What we can conclude is that the results of this survey support the need for a follow-up study to examine user searching in the identified domains.
Arguably the greatest: sport fans and communities at work on Wikipedia +Wikipedia appears to provide a socially constructed space in which a celebration of symbols, myths, and events organized around athletes and teams facilitates interaction for an imagined community. Users at work on the sample athletes’ pages debated and celebrated the achievements of these athletes and then represented them through a process that is both contentious and conciliatory. Wikipedia appears to provide a unique means of communicating information and a gathering place for fans that have no clear group identifying characteristics beyond their participation in revising electronic sources of information. The structure of articles allows sports fans to quickly identify pertinent sport and social values in clearly formatted sections. This study examined the ways in which Wikipedia is a tool of communication for participants and an accessible narrative developed in an imagined social space that builds another location for community. Collective knowledge produced via Wikipedia’s processes results in articles that foster an imagined community of sporting fans. This imagined community is supported by reference to tradition and celebration of sporting events in the article narratives. The use of statistical information measures athletes against other athletic achievements and suspends players in webs of comparison. Both practices present a frame on which the imagined community can be mounted. The guidelines and practice of editing on Wikipedia condition the content, while interaction affirms the project. Participation in Wikipedia allows fans to pursue consensus on facts and events through collaborative editing and asynchronous communication. In the context of Wikipedia entries about sport, participants diffuse meaning to an imagined community by adapting portions of public narratives such as news reports into “encyclopedic” accounts. Through indirect and direct forms of communication, fans share narratives within Wikipedia pages.
Art history: a guide to basic research resources +In addition to database searching skills that can be applied to other disciplines, students learn valuable lessons through the research experience. The most important lesson is to keep trying, not to give up upon encountering the first obstacles, and not to expect instant gratification.
Assessing the value of cooperation in Wikipedia +We have shown that although Wikipedia is a complex system in which millions of diverse editors collaborate in an unscheduled and virtually uncontrolled fashion, editing follows a very simple overall pattern. This pattern implies that a small number of articles, corresponding to topics of high relevance or visibility, accrete a disproportionately large number of edits. And, while large collaborations have been shown to fail in many contexts, Wikipedia article quality continues to increase, on average, as the number of collaborators and the number of edits increases. Thus, topics of high interest or relevance are naturally brought to the forefront of visibility and quality.
Assigning trust to Wikipedia content +The results on precision and recall, word longevity prediction, and trust distribution overall indicate that the trust we compute has indeed a predictive value with respect to future text stability. As mentioned in the introduction, this is an indication that the trust system provides valuable information; the visitors to our on-line demo seemed, in anecdotal fashion, to corroborate this finding.
Automatic vandalism detection in Wikipedia +Potthast et al. **** presented a new approach to detect vandalism in Wikipedia based on logistic regression, a machine learning classification algorithm. The classification task is accomplished based on various features extracted to quantify the characteristics of vandalism in Wikipedia articles. These features include term frequency, character distribution, and edit anonymity. This approach achieved 83% precision at 77% recall.
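The approach summarized above is a feature-based logistic regression classifier. A hedged sketch of that kind of classifier follows, using scikit-learn; the feature definitions below (insertion length, character-distribution entropy, uppercase ratio, anonymity flag) are simplified stand-ins and not the features used by Potthast et al.

```python
# Illustrative edit classifier: extract simple features from an edit and fit
# a logistic regression model. Features and training data are toy examples.
import math
from collections import Counter
from sklearn.linear_model import LogisticRegression

def features(old_text, new_text, is_anonymous):
    inserted = new_text.replace(old_text, "", 1) if old_text in new_text else new_text
    counts = Counter(inserted.lower())
    total = sum(counts.values()) or 1
    # Character-distribution entropy: vandalism like "AAAAAA!!!" tends to be low.
    entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
    upper_ratio = sum(ch.isupper() for ch in inserted) / max(len(inserted), 1)
    return [len(inserted), entropy, upper_ratio, float(is_anonymous)]

# Tiny illustrative training set: (old, new, anonymous) edits with labels.
edits = [
    ("The cat sat.", "The cat sat. It is a common pet.", False),
    ("The cat sat.", "The cat sat. AAAAAA LOL!!!", True),
]
labels = [0, 1]  # 0 = regular edit, 1 = vandalism
clf = LogisticRegression().fit([features(*e) for e in edits], labels)
print(clf.predict([features("Paris is a city.", "Paris is a city. LOLOLOL", True)]))
```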
Automatic word sense disambiguation based on document networks +In the paper, a word sense disambiguation method based on document networks is described. The advantages of the method are as follows: • coverage of a large portion of the natural language, • ease of understanding the reasons for selecting a particular sense, • large coverage of possible senses (for the senses, both dictionary terms and cases of term use in texts are used), • the method is completely automatic. The disadvantage of the method is that preliminary processing of Wikipedia is required. Our experiments showed that the accuracy of the method is comparable with that of systems described in the literature. Taking into account link types makes it possible to better calculate semantic relatedness between the terms, which is evidenced by the improvement of accuracy and recall of the word sense disambiguation method. Moreover, the method was tested on different collections, which yields a more complete picture of the results of the algorithm's operation. The paper also discusses difficulties in comparing the existing algorithms, which are due to the fact that the commonly accepted collection of test documents, SenseEval, is not suitable for comparing Wikipedia-based methods. A way out of this situation might be the creation and support of a similar corpus on the basis of Wikipedia and the adaptation of the existing methods to testing on such a collection.
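The entry notes that taking link types into account improves the semantic relatedness computed between terms. As background only, here is a simplified sketch of plain link-based relatedness between two Wikipedia articles, in the style of a normalized link-distance measure; the link-type weighting the paper adds is omitted.

```python
# Simplified link-based relatedness between two articles, computed from the
# sets of articles that link to each of them (a normalized-distance style
# measure; this is an assumption, not the paper's exact formula).
import math

def relatedness(links_a, links_b, total_articles):
    """Higher is more related; links_* are sets of article ids linking to each term."""
    common = links_a & links_b
    if not common:
        return 0.0
    a, b, ab, n = len(links_a), len(links_b), len(common), total_articles
    denom = math.log(n) - math.log(min(a, b))
    if denom <= 0:
        return 1.0
    distance = (math.log(max(a, b)) - math.log(ab)) / denom
    return max(0.0, 1.0 - distance)

print(relatedness({1, 2, 3, 4}, {3, 4, 5}, total_articles=1000))
```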
Automatically refining the Wikipedia infobox ontology +Our experiments show that joint-inference dominates other methods, achieving an impressive 96.8% precision at 92.1% recall. The resulting ontology contains subsumption relations and schema mappings between Wikipedia’s infobox classes; additionally, it maps these classes to WordNet.
Automatising the learning of lexical patterns: an application to the enrichment of WordNet by extracting semantic relationships from Wikipedia +The algorithm has been evaluated with the whole Simple English Wikipedia entries, as available on September 27, 2005. Each of the entries was disambiguated using the procedure described in [63]. An evaluation of 360 entries, performed by two human judges, indicates that the precision of the disambiguation is 92% (87% for polysemous words). The high figure should not come as a surprise, given that, as can be expected, it is an easier problem to disambiguate the title of an encyclopedia entry (for which there exist much relevant data) than a word inside unrestricted text. The next step consisted in extracting, from each Wikipedia entry e, a list of sentences containing references to other entries f which are related to e inside WordNet. This resulted in 485 sentences for hyponymy, 213 for hyperonymy, 562 for holonymy and 509 for meronymy. When analysing these patterns, however, we found that, both for hyperonymy and meronymy, most of the sentences extracted only contained the name of the entry f (the target of the relationship) with no contextual information around it. The reason was unveiled by examining the web pages: • In the case of hyponyms and holonyms, it is very common to express the relationship with natural language, with expressions such as A dog is a mammal, or A wheel is part of a car. • On the other hand, when describing hyperonyms and meronyms, their hyponyms and holonyms are usually expressed with enumerations, which tend to be formatted as HTML bullet lists. Therefore, the sentence splitter chunks each hyponym and each holonym as belonging to a separate sentence. All the results in these experiments have been evaluated by hand by two judges. The total inter-judge agreement reached 95%. In order to unify the criteria, in the doubtful cases, similar relations were looked up inside WordNet, and the judges tried to apply the same criteria as shown by those examples. The cases in which the judges disagreed have not been taken into consideration for calculating the accuracy.
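The extraction step described above collects, for an entry e, the sentences that mention a related entry f. A rough sketch of that step is given below, assuming a naive period-based sentence splitter rather than the splitter used in the paper; the candidate_sentences helper is hypothetical.

```python
# Illustrative extraction of candidate pattern sentences: keep sentences of an
# entry's text that mention one of its WordNet-related entries.
import re

def candidate_sentences(entry_text, related_titles):
    """Return (related_title, sentence) pairs for sentences mentioning a related entry."""
    sentences = re.split(r"(?<=[.!?])\s+", entry_text)
    pairs = []
    for sentence in sentences:
        for title in related_titles:
            if re.search(r"\b" + re.escape(title) + r"\b", sentence, re.IGNORECASE):
                pairs.append((title, sentence))
    return pairs

print(candidate_sentences(
    "A dog is a mammal. Dogs have four legs. A wheel is part of a car.",
    ["mammal", "car"],
))
```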