On the measurability of information quality

From WikiLit
On the Measurability of Information Quality
Authors: Ofer Arazy, Rick Kopak
Citation: Journal of the American Society for Information Science and Technology 62 (1): 89-99. 2011.
Publication type: Journal article
Peer-reviewed: Yes
Database(s):
DOI: 10.1002/asi.21447
Link(s): http://onlinelibrary.wiley.com/doi/10.1002/asi.21447/abstract
Added by Wikilit team: No but verified
On the Measurability of Information Quality is a publication by Ofer Arazy and Rick Kopak.


Abstract

The notion of information quality (IQ) has been investigated extensively in recent years. Much of this research has been aimed at conceptualizing IQ and its underlying dimensions (e.g., accuracy, completeness) and at developing instruments for measuring these quality dimensions. However, less attention has been given to the measurability of IQ. The objective of this study is to explore the extent to which a set of IQ dimensions—accuracy, completeness, objectivity, and representation—lend themselves to reliable measurement. By reliable measurement, we refer to the degree to which independent assessors are able to agree when rating objects on these various dimensions. Our study reveals that multiple assessors tend to agree more on certain dimensions (e.g., accuracy) while finding it more difficult to agree on others (e.g., completeness). We argue that differences in measurability stem from properties inherent to the quality dimension (i.e., the availability of heuristics that make the assessment more tangible) as well as on assessors’ reliance on these cues. Implications for theory and practice are discussed.

Research questions

"Our aim is to investigate the measurability of IQ constructs; that is, the extent to which existing scales of IQ dimensions lend themselves to consistent assessments by multiple judges. Our research question is whether there are some recognized dimensions of IQ that are inherently more reliable and that show less variation in terms of raters’ agreement levels." (p. 90)
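The "raters' agreement levels" in the research question are usually quantified with an interrater-reliability statistic. As an illustration of the idea (this is not the paper's own analysis, and the rating data below are made up), the sketch computes Fleiss' kappa, a standard agreement measure for a fixed number of raters classifying items into categories:

```python
def fleiss_kappa(counts):
    """Fleiss' kappa for N items rated into k categories.

    counts[i][c] = number of raters assigning item i to category c;
    every item must be rated by the same number of raters.
    """
    n_items = len(counts)
    n_raters = sum(counts[0])
    # Observed agreement: average fraction of rater pairs that agree per item.
    p_bar = sum(
        (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in counts
    ) / n_items
    # Chance agreement from the marginal category proportions.
    totals = [sum(row[c] for row in counts) for c in range(len(counts[0]))]
    p_e = sum((t / (n_items * n_raters)) ** 2 for t in totals)
    return (p_bar - p_e) / (1 - p_e)

# Hypothetical data: 4 articles, 5 raters each, rating one IQ dimension
# as low / medium / high (category counts per article).
accuracy_ratings = [
    [5, 0, 0],   # unanimous "low"
    [0, 5, 0],   # unanimous "medium"
    [0, 0, 5],   # unanimous "high"
    [1, 1, 3],   # partial disagreement
]
print(round(fleiss_kappa(accuracy_ratings), 3))  # → 0.735
```

Kappa near 1 means the raters agree far beyond chance; values near 0 indicate chance-level agreement, matching the paper's notion of a dimension being "measurable" when independent assessors converge.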

Research details

Topics: Comprehensiveness, Readability and style, Reliability, Reader perceptions of credibility
Domains: Information science, Information systems
Theory type: Design and action
Wikipedia coverage: Case
Theories: "Information Quality = Accuracy, Completeness, Objectivity, Representation, and Composite Information Quality"
Research design: Phenomenology
Data source: Wikipedia pages
Collected data time dimension: Cross-sectional
Unit of analysis: N/A
Wikipedia data extraction: Live Wikipedia
Wikipedia page type: Article
Wikipedia language: English

Conclusion

"The notion of IQ is of primary concern to information science scholars, and it has attracted significant attention in recent years. Various conceptualizations of IQ have been proposed, and most frameworks concur that IQ is a high-level construct that incorporates several dimensions (i.e., other constructs) such as accuracy and completeness. However, less attention has been given to the “measurability” (i.e., the ability to consistently measure) of IQ. Empirical studies of IQ often employ a survey to assess readers’ perceptions of a resource’s quality. Thus, the measurement of these quality constructs has been based on people’s perceptions or estimates of appropriateness. Often, studies of IQ assume that people’s abilities to perceive various dimensions are similar for all quality dimensions, and overlook issues of reliability of measurement. Findings from this study demonstrate the difficulty of reaching a consensus on IQ assessment, and reveal some important differences in agreement levels between these dimensions.

Implications for Research and Practice: Our findings have implications for both research and practice. The primary implication for information science scholars is the need for care in assessing IQ constructs. Using multiple items for constructs and ensuring the correlations between these items (i.e., ensuring construct validity) may not be sufficient, as there are likely to be inconsistencies between assessors in their perceptions of an object’s quality. Since some quality dimensions are more difficult for assessors to agree on than are others (i.e., accuracy, objectivity), it is recommended that future studies of IQ give extra attention to the measurement of these constructs. Possibly, assessors could be given more training and allowed more time in making judgments on these dimensions, survey questions could be more specific, and more assessors could be employed, including measurement of individual domain knowledge and task-expertise levels that affect an assessor’s ability to make judgments on the various IQ dimensions. For information users, we would recommend care in judging quality, and in accepting others’ quality ratings, as quality is such a highly subjective concept. Information users should realize that they (often unconsciously) employ heuristics in assessing quality, and that these heuristics are limited in estimating quality dimensions such as accuracy and objectivity. While knowledge of available heuristics for various IQ dimensions may be very useful, users should be aware of the limitations of these heuristics and that they may provide only a partial and somewhat limited indication of the overall quality of the object. Knowledge about these limitations also is important for information literacy education, where a greater focus might be placed on assessment techniques for those IQ dimensions less amenable to heuristic representation.
Another practical recommendation is aimed at Web services that produce IQ metrics for their published content. These metrics often are based on users’ ratings. For example, many health-related Web sites have tools for estimating the quality of Web pages, and use symbols such as “award” or “seal” to indicate high-quality pages. These tools rarely report on the interrater reliability of the ratings (Gagliardi & Jadad, 2002). The low agreement levels recorded in our study suggest that ratings from a relatively large number of users are required for producing a quality score. Moreover, the differences in agreement for the various dimensions imply that users should be allowed to rate an article along various dimensions, and that more care should be placed (e.g., provide more guidance, require more raters) on the dimensions that are difficult to assess: accuracy and objectivity. One example in this direction is provided by the Public Library of Science (PLoS) journals. In PLoS, readers can rate an article according to insight, reliability, and style as well as a check box where you can indicate if you have any competing interests with the article (i.e., objectivity). Our suggestion to PLoS would be to allow readers to rate the articles on additional dimensions such as accuracy and completeness, and to consider the variance in responses when producing an aggregate quality score. Included with this might be a declaration (i.e., a self-assessment) of the rater’s own level of expertise in the topical area addressed in the article being rated. Users of such services should be careful to accept quality scores without knowledge of what quality dimensions the score represents and the number of ratings used to generate it." (p. 97)
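The recommendation to require enough raters and to account for the variance in responses when aggregating could be sketched as follows. The dimension names, the 1-5 scale, and the thresholds are illustrative assumptions, not values from the paper:

```python
from statistics import mean, stdev

def aggregate_quality(ratings, min_raters=5, max_stdev=1.0):
    """Aggregate per-dimension user ratings into quality scores,
    withholding any dimension whose ratings are too few or too dispersed.

    ratings: {dimension: [individual scores on a 1-5 scale]}
    min_raters / max_stdev are hypothetical policy thresholds.
    """
    report = {}
    for dim, scores in ratings.items():
        if len(scores) < min_raters:
            report[dim] = None        # too few raters to trust a mean
        elif stdev(scores) > max_stdev:
            report[dim] = None        # raters disagree too much
        else:
            report[dim] = mean(scores)
    return report

print(aggregate_quality({
    "completeness": [4, 4, 5, 4, 4, 5],   # consistent -> mean reported
    "accuracy":     [1, 5, 2, 5, 1, 4],   # high variance -> withheld
    "objectivity":  [3, 4],               # too few raters -> withheld
}))
```

Returning `None` rather than a number for high-variance or thin dimensions is one way for a service to avoid presenting a single score whose underlying ratings a reader cannot assess, in line with the authors' caution above.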

Comments


Further notes
