On the measurability of information quality
Abstract The notion of information quality (IQ) has been investigated extensively in recent years. Much of this research has been aimed at conceptualizing IQ and its underlying dimensions (e.g., accuracy, completeness) and at developing instruments for measuring these quality dimensions. However, less attention has been given to the measurability of IQ. The objective of this study is to explore the extent to which a set of IQ dimensions—accuracy, completeness, objectivity, and representation—lend themselves to reliable measurement. By reliable measurement, we refer to the degree to which independent assessors are able to agree when rating objects on these various dimensions. Our study reveals that multiple assessors tend to agree more on certain dimensions (e.g., accuracy) while finding it more difficult to agree on others (e.g., completeness). We argue that differences in measurability stem from properties inherent to the quality dimension (i.e., the availability of heuristics that make the assessment more tangible) as well as from assessors' reliance on these cues. Implications for theory and practice are discussed.
Added by wikilit team No but verified  +
Collected data time dimension Cross-sectional  +
Conclusion "The notion of IQ is of primary concern to information science scholars, and it has attracted significant attention in recent years. Various conceptualizations of IQ have been proposed, and most frameworks concur that IQ is a high-level construct that incorporates several dimensions (i.e., other constructs) such as accuracy and completeness. However, less attention has been given to the 'measurability' (i.e., the ability to consistently measure) of IQ. Empirical studies of IQ often employ a survey to assess readers' perceptions of a resource's quality. Thus, the measurement of these quality constructs has been based on people's perceptions or estimates of appropriateness. Often, studies of IQ assume that people's abilities to perceive various dimensions are similar for all quality dimensions, and overlook issues of reliability of measurement. Findings from this study demonstrate the difficulty of reaching a consensus on IQ assessment, and reveal some important differences in agreement levels between these dimensions.
Implications for Research and Practice: Our findings have implications for both research and practice. The primary implication for information science scholars is the need for care in assessing IQ constructs. Using multiple items for constructs and ensuring the correlations between these items (i.e., ensuring construct validity) may not be sufficient, as there are likely to be inconsistencies between assessors in their perceptions of an object's quality. Since some quality dimensions are more difficult for assessors to agree on than are others (i.e., accuracy, objectivity), it is recommended that future studies of IQ give extra attention to the measurement of these constructs.
Possibly, assessors could be given more training and allowed more time in making judgments on these dimensions, survey questions could be more specific, and more assessors could be employed, including measurement of individual domain knowledge and task-expertise levels that affect an assessor's ability to make judgments on the various IQ dimensions. For information users, we would recommend care in judging quality, and in accepting others' quality ratings, as quality is such a highly subjective concept. Information users should realize that they (often unconsciously) employ heuristics in assessing quality, and that these heuristics are limited in estimating quality dimensions such as accuracy and objectivity. While knowledge of available heuristics for various IQ dimensions may be very useful, users should be aware of the limitations of these heuristics and that they may provide only a partial and somewhat limited indication of the overall quality of the object. Knowledge about these limitations also is important for information literacy education, where a greater focus might be placed on assessment techniques for those IQ dimensions less amenable to heuristic representation. Another practical recommendation is aimed at Web services that produce IQ metrics for their published content. These metrics often are based on users' ratings. For example, many health-related Web sites have tools for estimating the quality of Web pages, and use symbols such as 'award' or 'seal' to indicate high-quality pages. These tools rarely report on the interrater reliability of the ratings (Gagliardi & Jadad, 2002). The low agreement levels recorded in our study suggest that ratings from a relatively large number of users are required for producing a quality score.
Moreover, the differences in agreement for the various dimensions imply that users should be allowed to rate an article along various dimensions, and that more care should be placed (e.g., provide more guidance, require more raters) on the dimensions that are difficult to assess: accuracy and objectivity. One example in this direction is provided by the Public Library of Science (PLoS) journals. In PLoS, readers can rate an article according to insight, reliability, and style, as well as a check box where they can indicate whether they have any competing interests with the article (i.e., objectivity). Our suggestion to PLoS would be to allow readers to rate the articles on additional dimensions such as accuracy and completeness, and to consider the variance in responses when producing an aggregate quality score. Included with this might be a declaration (i.e., a self-assessment) of the rater's own level of expertise in the topical area addressed in the article being rated. Users of such services should be careful about accepting quality scores without knowledge of what quality dimensions the score represents and the number of ratings used to generate it." (p. 97)
Data source Wikipedia pages  +
Doi 10.1002/asi.21447 +
Google scholar url  +
Has author Ofer Arazy + , Rick Kopak +
Has domain Information science + , Information systems +
Has topic Comprehensiveness + , Readability and style + , Reliability + , Reader perceptions of credibility +
Issue 1  +
Pages 89-99  +
Peer reviewed Yes  +
Publication type Journal article  +
Published in Journal of the American Society for Information Science and Technology +
Research design Phenomenology  +
Research questions "Our aim is to investigate the measurability of IQ constructs; that is, the extent to which existing scales of IQ dimensions lend themselves to consistent assessments by multiple judges. Our research question is whether there are some recognized dimensions of IQ that are inherently more reliable and that show less variation in terms of raters' agreement levels." (p. 90)
Revid 10,890  +
Theories Information Quality = Accuracy, Completeness, Objectivity, Representation, and Composite Information Quality
Theory type Design and action  +
Title On the Measurability of Information Quality
Unit of analysis N/A  +
Url  +
Volume 62  +
Wikipedia coverage Case  +
Wikipedia data extraction Live Wikipedia  +
Wikipedia language English  +
Wikipedia page type Article  +
Year 2011  +
Creation date 30 May 2012 04:57:03  +
Categories Comprehensiveness  + , Readability and style  + , Reliability  + , Reader perceptions of credibility  + , Information science  + , Information systems  + , Publications with missing comments  + , Publications  +
Modification date 30 January 2014 20:30:12  +